Hello. I expect you won’t like me because I’m Christian and female and don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should. I’ve been lurking for a long time. The first time I found this place I followed a link to OvercomingBias from AnneC’s blog and from there, without quite realizing it, found myself archive-binging and following another link here. But then I stopped and left and then later I got linked to the Sequences from Harry Potter and the Methods of Rationality.
A combination of the whole evaporative cooling thing and looking at an old post that wondered why there weren’t more women convinced me to join. You guys are attracting a really narrow demographic and I was starting to wonder whether you were just going to turn into a cult and I should ignore you.
...And I figure I can still leave if that ends up happening, but if everyone followed the logic I just espoused, it’ll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world. I’d rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don’t agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
Okay, ready to be shouted down. I’ll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I’ll probably just leave soon anyway. Nothing good can come of this. I don’t know why I’m doing this. I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
Wow. Some of your other posts are intelligent, but this is pure troll-bait.
EDIT: I suppose I should share my reasoning. Copied from my other post lower down the thread:
Hello, I expect you won’t like me, I’m
Classic troll opening. Challenges us to take the post seriously. Our collective ‘manhood’ is threatened if we react normally (e.g. saying “trolls fuck off”).
don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should
Insulting straw man with a side of “you are an irrational cult”.
I’ve been lurking for a long time… overcoming bias… sequences… HP:MOR… namedropping
“Seriously, I’m one of you guys”. Concern troll disclaimer. Classic.
evaporative cooling… women… I’m here to help you not be a cult.
Again undertones of “you are a cult and you must accept my medicine or turn into a cult”. Again we are challenged to take it seriously.
I just espoused, it’ll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world.
I didn’t quite understand this part, but again, straw man caricature.
I’d rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don’t agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
There’s a rhetorical meme on 4chan that elegantly deals with this kind of crap:
>implying we don’t care about friendliness
>implying you know more about friendliness than EY
’nuff said
Okay, ready to be shouted down. I’ll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all.
classic reddit downvote preventer:
Post a troll or other worthless opinion
Imply that the hivemind won’t like it
Appeal to people’s fear of hivemind
Collect upvotes.
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
Again implying an irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is “no no, we don’t hate you and we certainly won’t censor you; please we want more christian trolls like you”. EDIT: Ha! well predicted I say. I just looked at the other 500 responses. /EDIT
I’ll probably just leave soon anyway. Nothing good can come of this. I don’t know why I’m doing this. I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
And top it off with a bit of sympathetic character, damsel-in-distress crap. EDIT: Oh and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn’t have to be.
Trolling is a art. and Aspiring_Knitter is a artist. 10⁄10.
You’ve got an interesting angle there, but I don’t think AspiringKnitter is a troll in the pernicious sense—her post has led to a long reasonable discussion that she’s made a significant contribution to.
I do think she wanted attention, and her post had more than a few hooks to get it. However, I don’t think it’s useful to describe trolls as “just wanting attention”. People post because they want attention. The important thing is whether they repay attention with anything valuable.
I don’t have the timeline completely straight, but it looks to me like AspiringKnitter came in trolling and quickly changed gears to semi-intelligent discussion. Such things happen. AspiringKnitter is no longer a troll, that’s for sure; like you say “her post has led to a long reasonable discussion that she’s made a significant contribution to”.
All that, however, does not change the fact that this particular post looks, walks, and quacks like troll-bait and should be treated as such. I try to stay out of the habit of judging posts on the quality of the poster’s other stuff.
Thanks for letting me know. If most people disagree with my assessment, I’ll adjust my troll-resistance threshold.
I just want to make sure we don’t end up tolerating people who appear to have trollish intent. AspiringKnitter turned out to be positive, but I still think that particular post needed to be called out.
You’re welcome. This makes me glad I didn’t come out swinging—I’d suspected (actually I had to resist the temptation to obsess about the idea) that you were a troll yourself.
If you don’t mind writing about it, what sort of places have you been hanging out that you got your troll sensitivity calibrated so high? I’m phrasing it as “what sort of places” in case you’d rather not name particular websites.
what sort of places have you been hanging out that you got your troll sensitivity calibrated so high?
4chan, where there is an interesting dynamic around trolling and getting trolled. Getting trolled is low-status; correctly calling out trolls that no-one else caught is high-status; trolling itself is god-status; and calling troll incorrectly is low-status, like getting trolled. With that culture, the arts of trolling, counter-trolling and troll detection get well trained.
I learned a lot of trolling theory from reddit (like the downvote preventer and concern trolling). The politics, anarchist, feminist and religious subreddits have a lot of good cases to study (they generally suck at managing community, tho).
I learned a lot of relevant philosophy of trolling and some more theory from /i/nsurgency boards and wikis (start at partyvan.info). Those communities are in a sorry state these days.
A lot of what I learned on 4chan and /i/ is not common knowledge around here and could potentially be useful. Maybe I’ll beat some of it into a useful form and post it.
Maybe I’ll beat some of it into a useful form and post it.
For one thing, the label “trolling” seems like it distracts more than it adds, just like “dark arts.” AspiringKnitter’s first post was loaded with influence techniques, as you point out, but it’s not clear to me that pointing at influence techniques and saying “influence bad!” is valuable, especially in an introduction thread. I mean, what’s the point of understanding human interaction if you use that understanding to botch your interactions?
There is a clear benefit to pointing out when a mass of other people are falling for influence techniques in a way you consider undesirable.
It is certainly worth pointing out the techniques, especially since it looks like not everyone noticed them. What’s not clear to me is the desirability of labeling it as “bad,” which is how charges of trolling are typically interpreted.
Easiest first: I introduced “dark arts” as an example of a label that distracted more than it added. It wasn’t meant as a reference to or description of your posts.
In your previous comment, you asked the wrong question (‘were they attempting to persuade?’) and then managed to come up with the wrong answer (‘nope’). Both of those were disappointing (the first more so) especially in light of your desire to spread your experience.
The persuasion was “please respond to me nicely.” It was richly rewarded: 20 welcoming responses (when most newbies get 0 or 1), and the first unwelcoming response got downvoted quickly.
The right question is, what are our values, here? When someone expressing a desire to be welcomed uses influence techniques that further that end, should we flip the table over in disgust that they tried to influence us? That’ll show them that we’re savvy customers that can’t be trolled! Or should we welcome them because we want the community to grow? That’ll show them that we’re worth sticking around for.
I will note that I upvoted this post, because in the version that I saw it started off with “Some of your other posts are intelligent” and then showed many of the tricks AspiringKnitter’s post used. Where I disagree with you is the implication that we should have rebuked her for trolling. The potential upsides of treating someone with charity and warmth are far greater than the potential downsides of humoring a troll for a few posts.
That’s interesting—I’ve never hung out anywhere that trolling was high status.
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
I think I understand concern trolling, which I take to be giving advice which actually weakens the receiver’s position, though I think the coinage “hlep” from Making Light is more widely useful—inappropriate, annoying/infuriating advice which is intended to be helpful but doesn’t have enough thought behind it. But what’s a downvote preventer?
Hlep has a lot of overlap with other-optimizing.
I’d be interested in what you have to say about the interactions at 4chan and /i/, especially about breakdowns in political communities.
I’ve been mulling the question of how you identify and maintain good will—to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn’t start out being all that angry at each other.
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
On reddit there are just upvotes and downvotes. Reddit doesn’t have developed social mechanisms for dealing with trolls, because the downvotes work most of the time. Developing troll technology like the concern troll and the downvote preventer to hack the hivemind/vote dynamic is the only way to succeed.
4chan doesn’t have any social mechanisms either, just the culture. Communication is unnecessary for social/cultural pressure to work, interestingly. Once the countertroll/troll/troll-detector/trolled/troll-crier hierarchy is formed by the memes and mythology, the rest just works in your own mind. “fuck I got trolled, better watch out next time”, “all these people are getting trolled, but I know the OP is a troll; I’m better than them”, “successful troll is successful”, “I trolled the troll”. Even if you don’t post them and no-one reacts to them, those thoughts activate the social shame/status/etc machinery.
I think I understand concern trolling, which I take to be giving advice which actually weakens the receiver’s position, though I think the coinage “hlep” from Making Light is more widely useful
Not quite. A concern troll is someone who comes in saying “I’m a member of your group, but I’m unsure about this particular point in a highly controversial way” with the intention of starting a big useless flame-war.
Haven’t heard of hlep. Seems interesting.
but what’s a downvote preventer
The downvote preventer is when you say “I know the hivemind will downvote me for this, but...” It creates an association in the reader’s mind between downvoting and being a hivemind drone, which people are afraid of, so they don’t downvote. It’s one of the techniques trolls use to protect the payload, like the way the concern troll used community membership.
I’ve been mulling the question of how you identify and maintain good will—to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn’t start out being all that angry at each other.
Yes. A big part of trolling is actually creating and fueling those disagreements. COINTELPRO trolling is disrupting people’s ability to identify trolls and goodwill. There is a lot of depth and difficulty to that.
Wow, I don’t post over Christmas and look what happens. Easiest one to answer first.
Wow, thanks!
You’re a little mean.
You don’t need an explanation of 2, but let me go through your post and explain about 1.
Classic troll opening. Challenges us to take the post seriously. Our collective ‘manhood’ is threatened if we react normally (e.g. saying “trolls fuck off”).
Huh. I guess I could have come up with that explanation if I’d thought. The truth here is that I was just thinking “you know, they really won’t like me, this is stupid, but if I make them go into this interaction with their eyes wide open about what I am, and phrase it like so, I might get people to be nice and listen”.
don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should
Insulting straw man with a side of “you are an irrational cult”.
That was quite sincere and I still feel that that’s a worry.
Also, I don’t think I know more about friendliness than EY. I think he’s very knowledgeable. I worry that he has the wrong values so his utopia would not be fun for me.
classic reddit downvote preventer:
Post a troll or other worthless opinion
Imply that the hivemind won’t like it
Appeal to people’s fear of hivemind
Collect upvotes.
Wow, you’re impressive. (Actually, from later posts, I know where you get this stuff from. I guess anyone could hang around 4chan long enough to know stuff like that if they had nerves of steel.) I had the intuition that this would lead to fewer downvotes (but note that I didn’t lie; I did expect that it was true, from many theist-unfriendly posts on this site), but I didn’t consciously think “this procedure will appeal to people’s fear of the hivemind to shame them into upvoting me.” I want to thank you for pointing that out. Knowing how and why that intuition was correct will allow me to decide with eyes wide open whether to do something like that in the future, and if I ever actually want to troll, I’ll be better at it.
And top it off with a bit of sympathetic character, damsel-in-distress crap.
Actually, I just really need to learn to remember that while I’m posting, proper procedure is not “allow internal monologue to continue as normal and transcribe it”. You have no idea how much trouble that’s gotten me into. (Go ahead and judge me for my self-pitying internal monologue if you want. Rereading it, I’m wondering how I failed to notice that I should just delete that part, or possibly the whole post.) On the other hand, I’d certainly hope that being honest makes me a sympathetic character. I’d like to be sympathetic, after all. ;)
This is not necessarily deliberate, but it doesn’t have to be.
Thank you. It wasn’t, but as you say, it doesn’t have to be. I hope I’ll be more mindful in the future, and bear morality in mind in crafting my posts here and elsewhere. I would never have seen these things so clearly for myself.
10⁄10.
Thanks, but no. LOL.
I’d upvote you, but your post is otherwise just so rude that I don’t think I will.
Thank you. I was complaining about his use of needless profanity to refer to what I said, and a general “I’m better than you” tone (understandable, if he comes from a place where catching trolls is high status, but still rude). I not only approve of being told that I’ve done something wrong, I actually thanked him for it. Crocker’s rules don’t say “explain things in an insulting way”, they say “don’t soften the truths you speak to me”. You can optimize for information—and even get it across better—when you’re not trying to be rude. For instance,
And top it off with a bit of sympathetic character, damsel-in-distress crap.
That would not convey less truth if it weren’t vulgar. You can easily communicate that someone is tugging people’s heartstrings by presenting as a highly sympathetic damsel in distress without being vulgar.
Also, stuff like this:
Ha! well predicted I say. I just looked at the other 500 responses.
That makes it quite clear that nyan_sandwich is getting a high from this and feels high-status because of behavior like this. While that in itself is fine, the whole post does have the feel of gloating to it. I simultaneously want to upvote it for information and downvote it for lowering the overall level of civility.
Here’s my attempt to clarify how I wish to be engaged with: convey whatever information you feel is true. Be as reluctant to actively insult me as you would anyone else, bearing in mind that a simple “this is incorrect” is not insulting to me, and nor is “you’re being manipulative”. “This is crap” always lowers the standard of debate. If you spell out what’s crappy about it, your readers (including yours truly) can grasp for themselves that it’s crap.
Of course, if nyan_sandwich just came from 4chan, we can congratulate him on being an infinitely better human being than everyone else he hangs out with, as well as on saying something that isn’t 100% insulting, vulgar nonsense. (I’d say less than 5% insulting, vulgar nonsense.) Actually, his usual contexts considered, I may upvote him after all. I know what it takes to be more polite than you’re used to others being.
Thus, one who has committed to these rules largely gives up the right to complain about emotional provocation, flaming, abuse and other violations of etiquette
There’s a decision theoretic angle here. If I declare Crocker’s rules, and person X calls me a filthy anteater, then I might not care about getting valuable information from them (they probably don’t have any to share) but I refrain from lashing out anyway! Because I care about the signal I send to person Y who is still deciding whether to engage with me, who might have a sensitive detector of Crocker’s rules violations. And such thoughtful folks may offer the most valuable critique. I’m afraid you might have shot yourself in the foot here.
I think this is generally correct. I do wonder about a few points:
If I am operating on Crocker’s Rules (I personally am not, mind, but hypothetically), and someone’s attempt to convey information to me has obvious room for improvement, is it ever permissible for me to let them know this? Given your decision theory point, my guess would be “yes, politely and privately,” but I’m curious as to what others think as well. As a side note, I presume that if the other person is also operating by Crocker’s Rules, you can say whatever you like back.
someone’s attempt to convey information to me has obvious room for improvement
Do you mean improvement of the information content or the tone? If the former, I think saying “your comment was not informative enough, please explain more” is okay, both publicly and privately. If the latter, I think saying “your comment was not polite enough” is not okay under the spirit of Crocker’s rules, neither publicly nor privately, even if the other person has declared Crocker’s rules too.
When these things are orthogonal, I think your interpretation is clear, and when information would be obscured by politeness the information should win—that’s the point of Crocker’s Rules. What about when information is obscured by deliberate impoliteness? Does the prohibition on criticizing impoliteness win, or the permit for criticizing lack of clarity? In any case, if the other person is not themselves operating by Crocker’s Rules, it is of course important that your response be polite, whatever it is.
Question: do Crocker’s rules work differently here than I’m used to? I’m used to a communication style where people say things to get the point across, even though such things would be considered rude in typical society, not for being insulting but for pointless reasons, and we didn’t do pointless things just to be typical. We were bluntly honest with each other, even (actually especially) when people were wrong (after all, it was kind of important that we convey that information accurately, completely and as quickly as possible in some cases), but to be deliberately insulting when information could have been just as easily conveyed some other way (as opposed to when it couldn’t be), or to be insulting without adding any useful information at all, was quite gauche. At one point someone mentioned that if we wanted to invoke that in normal society, we should say we were under Crocker’s rules.
So it looks like the possibilities worth considering are:
Someone LIED just to make it harder for us to fit in with normal society!
Someone was just wrong.
You’re wrong.
Crockering means different things to different people.
Baiting and switching by declaring Crocker’s rules, then shaming and condescending to people when they do not meet your standard of politeness, could legitimately be considered a manipulative social ploy.
I didn’t consider Crocker’s rules at all when reading nyan’s comment and it still didn’t seem at all inappropriate. You being outraged at the ‘vulgarity’ of the phrase “damsel in distress crap” is a problem with your excess sensitivity and not with the phrase. As far as I’m concerned “damsel in distress crap” is positively gentle. I would have used “martyrdom bullshit” (but then I also use bullshit as a technical term).
Crocker’s rules are about how people speak to you. But for all that it is a reply about your comment, nyan wasn’t even talking to you. He was talking to the lesswrong readers, warning them about perceived traps they are falling into when engaging with your comment.
Like it or not people tend to reciprocate disrespect with disrespect. While you kept your comment superficially civil and didn’t use the word ‘crap’ you did essentially call everyone here a bunch of sexist Christian hating bullies. Why would you expect people to be nice to you when you treat them like that?
The impression I have is that calling Crocker’s rules means never acting offended or angry at the way people talk to you, with the expectation that you’ll get more information if people don’t censor themselves out of politeness.
Some of your reactions here are not those I expect from someone under Crocker’s rules (who would just ignore anything insulting or offensive).
So maybe what you consider as “Crocker’s rules” is what most people here would consider “normal” discussion, so when you call Crocker’s rules, people are extra rude.
I would suggest just dropping the reference to Crocker’s rules. I don’t think they’re necessary for having a reasonable discussion, and they put pressure on the people you’re talking to to either call Crocker’s rules too (giving you carte blanche to be rude to them) or look uptight or something.
So maybe what you consider as “Crocker’s rules” is what most people here would consider “normal” discussion, so when you call Crocker’s rules, people are extra rude.
Possible. I’m inexperienced in talking with neurotypicals. All I know is what was drilled into me by them, which is basically a bunch of things of the form “don’t ever convey this piece of information because it’s rude” (where the piece of information is like… you have hairy arms, you’re wrong, I don’t like this food, I don’t enjoy spending time with you, this gift was not optimized for making me happy—and the really awful, horrible dark side where they feel pressured never to say certain things to me, like that I’m wrong, they’re annoyed by something I’m doing, I’m ugly, I sound stupid, my writing needs improvement—it’s horrible to deal with people who never say those things because I can never assume sincerity, I just have to assume they’re lying all the time) that upon meeting other neurodiverse people I immediately proceeded to forget all about. And so did they. And THAT works out well. It’s accepted within that community that “Crocker’s rules” is how the rest of the world will refer to it.
Anyway, if I’m not allowed to hear the truth without having to listen to whatever insults anyone can come up with, then so be it; I really want to hear the truth and I know it will never be given to me otherwise. But there IS supposed to be something between “you are not allowed to say anything to me except that I’m right about everything and the most wonderful special snowflake ever” and “insult me in every way you can think of”, even if the latter is still preferable to the former. (Is this community a place with a middle ground? I didn’t think such a thing existed. If so, I’ll gladly go by the normal rules of discussion here.)
the baseline interaction mode would be considered rude-but-not-insulting by most American subcultures, especially neurotypical ones
the interaction mode invoked by “Crocker’s rules” would be considered insulting by most American subcultures, especially neurotypical ones
there’s considerable heterogeneity in terms of what’s considered unacceptably rude
there’s a tentative consensus that dealing with occasional unacceptable rudeness is preferable to the consequences of disallowing occasional unacceptable rudeness, and
the community pushes back on perceived attempts to enforce politeness far more strongly than it pushes back on perceived rudeness.
Dunno if any of that answers your questions.
I would also say that nobody here has come even remotely close to “insult in every conceivable way” as an operating mode.
the baseline interaction mode would be considered rude-but-not-insulting by most American subcultures, especially neurotypical ones
the community pushes back on perceived attempts to enforce politeness far more strongly than it pushes back on perceived rudeness.
YES!
There seem to be a lot of new people introducing themselves on the Welcome thread today/yesterday. I would like to encourage everyone to maybe be just a tad bit more polite, and cognizant of the Principle of Charity, at least for the next week or two, so all our newcomers can acclimate to the culture here.
As someone who has only been on this site for a month or two (also as an NT, socially-skilled female), I have spoken in the past about my difficulties dealing with the harshness here. I ended up deciding not to fight it, since people seem to like it that way, and that’s ok. But I do think the community needs to be aware that this IS in fact an issue that new (especially NT) people are likely to shy away from, and even leave or just not post because of.
tl;dr- I deal with the “rudeness”, but want people to be aware that it does in fact exist. Those of us who dislike it have just learned to keep our mouths shut and deal with it. There are a lot of new people now, so try to soften it for the next week or two.
(Note: I have not been recently down-voted, flamed, or crushed, so this isn’t just me raging.)
I’m unlikely to change my style of presentation here as a consequence of new people arriving, especially since I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
If my presentation style is off-putting to new people who prefer a different style, I agree that’s unfortunate. I’m not sure that dealing with that by changing my style for their benefit—supposing they even benefit from it—is better.
You are correct, in that I do believe that many of the introductions here are from people who have been lurking a long time, but are following the principle of social proof and just introducing themselves now that everyone else is.
However, I do think that once they have gone through the motions of setting up an account and publishing their introduction, self-consistency will lead them to continue to be more active on this site; they have just changed their self-image to that of “Member of LW”, after all!
As for your other supposition, that they might not benefit from it: I will tell you that I have almost quit LW many times in the past month, and it is only a lack of anything better out there that has kept me here.
My assumption is that you are OK with this, and feel that people that can’t handle the heat should get out of the kitchen anyway, so to speak.
I think that is a valid point, IFF you want to maintain LW as it currently stands. I will admit that my preferences are different in that I hope LW grows and gets more and more participants. I also hope that this growth causes LW to be more “inclusive” and have a higher percentage of females (gender stereotyping here, sorry) and NTs, which will in effect lower the harshness of the site.
So I think our disagreement doesn’t stem from “bad” rationality on either of our parts. It’s just that we have different end-goals.
I’m sorry, I did not want to imply that you specifically made me want to quit. In all honesty, the lack of visual avatars means I can’t keep LW users straight at all.
But since you seem to be asking about your presentation style, here is me re-writing your previous post in a way that is optimized for a conversation I would enjoy, without feeling discomfort.
Original:
I’m unlikely to change my style of presentation here as a consequence of new people arriving, especially since I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
If my presentation style is off-putting to new people who prefer a different style, I agree that’s unfortunate. I’m not sure that dealing with that by changing my style for their benefit—supposing they even benefit from it—is better.
How I WISH LW operated (and realize that 95% of you do not wish this)
I agree that it’s unfortunate that the style of LW posts may drive new users away, especially if they would otherwise enjoy the site and become valuable participants. However, I don’t plan on updating my personal writing style here.
My main reason for this is that I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
I am also unsure if changing my writing style would actually help these newcomers in the long run. Or even if it did, would I prefer a LW that is watered-down, but more accessible? (my interpretation of what you meant by “better”)
I asked about my presentation style because that’s what I wrote about in the first place, and I couldn’t tell whether your response to my comment was actually a response to what I wrote, or some more general response to some more general thing that you decided to treat my comment as a standin for.
I infer from your clarification that it was the latter. I appreciate the clarification.
Your suggested revision of what I said would include several falsehoods, were I to have said it.
Your suggested revision of what I said would include several falsehoods, were I to have said it.
I had to fill in some interpretations of what I thought you could have meant. If what I filled in was false, it is just that I do not know your mind as well as you do. If I did, I could fill in things that were true.
Politeness does not necessarily require falsity. Your post lacked the politeness parts, so I had to fill in politeness parts that I thought sounded like reasonable things you might be thinking. Were you trying to be polite, you could fill in politeness parts with things that were actually true for you (and not just my best guesses.)
I infer from your explanation that your version of politeness does require that I reveal more information than I initially revealed. Can you say more about why?
How do I insult thee? Let me count the ways.
I insult thee to the depth and breadth and height
My mind can reach, when feeling out of sight
For the lack of Reason and the craft of Bayes.
I must confess, I have never actually heard the words ‘gyre’ and ‘falconer’. I assumed they could be pronounced in such a way that it would sound like a rhyme. In my head, they both were pronounced like ‘hear’. Likewise, I assumed one could pronounce ‘world’ and ‘hold’ in such a way that they could sort-of rhyme. In my head, ‘hold’ was pronounced ‘held’ and ‘world’ was pronounced ‘weld.’
Returning to this… if you’re still tempted, I’d love to see your take on it. Feel free to use me as a target if that helps your creativity, though I’m highly unlikely to take anything you say in this mode seriously. (That said, using a hypothetical third party would likely be emotionally easier.)
Unrelatedly: were you the person who had the script that sorts and displays all of a user’s comments? I’ve changed computers since being handed that pointer and seem to have misplaced it.
[T]hey put pressure on the people you’re talking to to either call Crocker’s rules too (giving you carte blanche to be rude to them) or look uptight or something.
This should be strongly rejected, if Crocker’s Rules are ever going to do more good than harm. I do not mean that it is not the case given existing norms (I simply do not know one way or the other), but that norms should be established such that this is clearly not the case. Someone who is unable to operate according to Crocker’s Rules attempting to do so does not improve discourse or information flow—no one should be pressured into it.
The problem is, the more a community is likely to consider X a “good” practice, the more it is likely to think less of those who refuse to do X, whatever X is; so I don’t see a good way of avoiding negative connotations to “unable to operate according to Crocker’s Rules”.
… that is, unless the interaction is not symmetric, so that when one side announces Crocker’s rules, there is no implicit expectation that the other side should do the same (with the associated status threat); for example if on my website I mention Crocker’s rules next to the email form or something.
But in a peer-to-peer community like this, that expectation is always going to be implicit, and I don’t see a good way to make it disappear.
As I’ve mentioned before, I am not operating by Crocker’s rules. I try to be responsible for my emotional state, but realize that I’m not perfect at this, so tell me the truth but there’s no need to be a dick about it. I am not unlikely, in the future, to declare Crocker’s rules with respect to some specific individuals and domains, but globally is unlikely in the foreseeable future.
Here’s my part too: I don’t declare Crocker’s rules and do not commit to paying any heed to whether others have declared Crocker’s rules. I’ll speak to people however I see fit—which will include taking into account the preferences of both the recipient and any onlookers to precisely the degree that seems appropriate or desirable at the time.
I don’t know about getting rid of it entirely, but we can at least help by stressing the importance of the distinction, and choosing to view operation by Crocker’s rules as rare, difficult, unrelated to any particular discussion, and of only minor status boost.
Another approach might be to make all Crocker communication private, and expect polite (enough) discourse publicly.
The underlying assumption is that rudeness is sometimes necessary for effective conveyance of information, if only to signal a lack of patience or tolerance: after all, knowing whether the speaker is becoming angry or despondent is useful rational evidence.
Looking hard for another source, something called the DoWire Wiki has this unsourced:
By invoking these Rules, the recipient declares that s/he does not care about, and some hold that s/he gives up all right to complain about and must require others not to complain about, any level of emotional provocation, flames, abuse of any kind.
So if anyone is using Crocker’s Rules a different way, I think it’s safe to say they’re doing it wrong, but only by definition. Maybe someone should ask Crocker, if they’re concerned.
OK. FWIW, I agree that nyan-sandwich’s tone was condescending, and that they used vulgar words. I also think “I suppose they can’t be expected to behave any better, we should praise them for not being completely awful” is about as condescending as anything else that’s been said in this thread.
Yeah, you’re probably right. I didn’t mean for that to come out that way (when I used to spend a lot of time on places with low standards, my standards were lowered, too), but that did end up insulting. I’m sorry, nyan_sandwich.
Crocker’s rules don’t say “explain things in an insulting way”, they say “don’t soften the truths you speak to me”. You can optimize for information—and even get it across better—when you’re not trying to be rude.
A lot of intelligent folks have to spend a lot of energy trying not to be rude, and part of the point of Crocker’s Rules is to remove that burden by saying you won’t call them on rudeness.
Not all politeness is inconsistent with communicating truth. I agree that “Does this dress make me look fat” has a true answer and a polite answer. It’s worth investing some attention into figuring out which answer to give. Often, people use questions like that as a trap, as mean-spirited or petty social and emotional manipulation. Crocker’s Rule is best understood as a promise that the speaker is aware of this dynamic and explicitly denies engaging in it.
That doesn’t license being rude. If you are really trying to help someone else come to a better understanding of the world, being polite helps them avoid cognitive biases that would prevent them from thinking logically about your assertions. In short, Crocker’s Rule does not mean “I don’t mind if you are intentionally rude to me.” It means “I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude.”
In short, Crocker’s Rule does not mean “I don’t mind if you are intentionally rude to me.” It means “I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude.”
Right, I wasn’t saying anything that contradicted that. Rather, some of us have additional cognitive burden in general trying to figure out if something is supposed to be rude, and I always understood part of the point of Crocker’s Rules to be removing that burden so we can communicate more efficiently. Especially since many such people are often worth listening to.
For what it’s worth, I generally see some variant of “please don’t flame me” attached only to posts which I’d call inoffensive even without it. I’m not crazy about seeing “please don’t flame me”, but I write it off to nervousness and don’t blame people for using it.
Caveat: I’m pretty sure that “please don’t flame me” won’t work in social justice venues.
I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.
I disagree. It’s an honest expression of feeling, and a reasonable statement of expectations, given LW’s other run-ins with self-identified theists. It may be a bit overstated, but not terribly much.
Do you really think it’s only a bit overstated? I mean, has anybody been banned for being religious? And has anybody here indicated that they hate Christians without immediately being called on falling into blue vs. green thinking?
Okay, ready to be shouted down. I’ll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I’ll probably just leave soon anyway. Nothing good can come of this. I don’t know why I’m doing this. I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
From her other posts, AspiringKnitter strikes me as being open-minded and quite intelligent, but that last paragraph really irks me. It’s self-debasing in an almost manipulative way—as if she actually wants us to talk to her like we “only want [her] to hate God” or as if we “really hate Christians”. Anybody who has spent any non-trivial amount of time on LW would know that we certainly don’t hate people we disagree with, at least to the best of my knowledge, so asserting that is not a charitable or reasonable expectation. Plus, it seems that it would now be hard(er) to downvote her because she specifically said she expects that, even given a legitimate reason to downvote.
Well, some of Eliezer’s posts about religion and religious thought have been more than a little harsh. (I couldn’t find it, but there was a post where he said something along the lines of “I have written about religion as the largest imaginable plague on thinking...”) They didn’t explicitly say that religious people are to be scorned, but it’s very easy to read in that implication, especially since many people who are equally vocal about religion being bad do hold that opinion.
Being honest and having reasonable expectations of being treated like a troll does not disqualify a post from being a troll.
I’ve been lurking for a long time… overcoming bias… sequences… HP:MOR… namedropping
“Seriously, I’m one of you guys”. Concern troll disclaimer. Classic.
I don’t follow how indicating that she’s actually read the site can be a mark against her. If the comment had not indicated familiarity with the site content, would you then describe it as less trollish?
It’s a classic troll technique. It’s not independent of the other trollish tendencies. Alone, saying those things does not imply a troll, but in the presence of other troll-content it is used to raise perceived standing and lower the perceived probability that the poster is a troll.
EDIT: And yes, trollish opinions without trollish disclaimers raise the probability of plain old stupidity.
EDIT2: Have to be very careful with understanding the causality of evidence supplied by hostile agents. What Evidence Filtered Evidence, and so on.
So… voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going “Seriously, I’m one of you guys”. Joking about the image a group’s ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Okay, so I see the bits that are protection against being called a troll. What I don’t see is the trolling. Is it “I’m a Christian”? If you think all Christians should pretend to be atheists… well, 500 responses disagree with you. Is it what you call straw men? I read those as jokes about what we look like to outsiders, but even if they’re sincere, they’re surrounded with so much display of uncertainty that “No, that’s not what we think.” should end it then and there. And if AspiringKnitter were a troll, why would she stop trolling and write good posts right after that?
Conclusion: You fail the principle of charity forever. You’re a jerk. I hope you run out of milk next time you want to eat cereal.
So… voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going “Seriously, I’m one of you guys”. Joking about the image a group’s ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Deliberate, active straw manning sarcasm for the purpose of giving insult and conveying contempt.
What I don’t see is the trolling.
Yes, trolling is distinguished from what nyan called “troll-bait” by, for the most part, duration. Trolls don’t stop picking fights and seem to thrive on the conflict they provoke. If nyan tried to claim that AspiringKnitter was a troll in general—and failed to update on the evidence from after this comment—he would most certainly be wrong.
Conclusion: You fail the principle of charity forever.
He wasn’t very charitable in his comment, I certainly would have phrased criticism differently (and directed most of it at those encouraging damsel in distress crap.) But for your part you haven’t failed the principle of charity—you have failed to parse language correctly and respond to the meaning contained therein.
You’re a jerk. I hope you run out of milk next time you want to eat cereal.
You’re a jerk. I hope you run out of milk next time you want to eat cereal.
This is not ok.
The cereal thing is comically mild. The impulse to wish bad things on others is a pretty strong one and I think it’s moderated by having an outlet to acknowledge that it’s silly in this or maybe some other way—I’d rather people publicly wish me to run out of milk than privately wish me dead.
The cereal thing is comically mild. The impulse to wish bad things on others is a pretty strong one and I think it’s moderated by having an outlet to acknowledge that it’s silly in this or maybe some other way
Calling nyan a jerk in that context wasn’t ok with me, and neither was any joke about wishing harm to come upon him. It was unjustified and inappropriate.
I’d rather people publicly wish me to run out of milk than privately wish me dead.
I don’t much care what MixedNuts wants to happen to nyan. The quoted combination of words constitutes a status transaction of a kind I would see discouraged. Particularly given that we don’t allow reciprocal personal banter of the kind this sort of insult demands. If, for example, nyan responded with a pun on a keyword and a reference to Mixed’s sister we wouldn’t allow it. When insults cannot be returned in kind the buck stops with the first personal insult. That is, Mixed’s.
[emphasis mine]. You assume that nyan is male. Where did “he” say that? nyan explicitly claims to be a “genderless internet being” in the introductions thread.
The last LW survey came out with 95% male, IIRC. Being 95% sure of something is quite strong. nyan called Aspiring_Knitter a troll on much less solid evidence. Also, you come from the unfortunate position of not having workable genderless pronouns.
[emphasis mine]. You assume that nyan is male. Where did “he” say that? nyan explicitly claims to be a “genderless internet being” in the introductions thread.
That’s fair. I used male because you sounded more like a male—and still do. If you are a genderless internet being then I will henceforth refer to you as an ‘it’. If you were a genderless human I would use the letter ‘v’ followed by whatever letters seem to fit the context.
I’d rather people publicly wish me to run out of milk than privately wish me dead.
Well, who knows what MixedNuts wishes? Wishing wedrifid runs out of milk doesn’t exclude this latter possibility.
I’m also reminded, of all the silly things, of (the overwhelmingly irrational) Simone Weil:
If someone does me an injury I must desire that this injury shall not degrade me. I must desire this out of love for him who inflicts it, in order that he may not really have done evil.
Delicious controversy. Yum. I might have a lulz-relapse and become a troll.
So… voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going “Seriously, I’m one of you guys”. Joking about the image a group’s ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Burn the witch!
Disagreement is not trolling. Neither is nervous disagreement. The hivemind thing had nothing to do with status signaling; it was about the reader’s insecurity. The group membership/cultural knowledge signaling thing is almost always used as a delivery vector for an ignoble payload.
They didn’t look like jokes or uncertainty to me. I am suddenly gripped by a mortal fear that I may not have a sense of humor. The damsel in distress thing was unconnected to the ideas thing.
TL;DR: what wedrifid said.
Okay, so I see the bits that are protection against being called a troll. What I don’t see is the trolling. Is it “I’m a Christian”? If you think all Christians should pretend to be atheists… well, 500 responses disagree with you. Is it what you call straw men? I read those as jokes about what we look like to outsiders, but even if they’re sincere, they’re surrounded with so much display of uncertainty that “No, that’s not what we think.” should end it then and there. And if AspiringKnitter were a troll, why would she stop trolling and write good posts right after that?
Again, they still don’t look like jokes. If everyone else decides they were jokes, I will upmod my belief that I am a humorless internet srs-taker. EDIT: Oh, I forgot to address the “AK is not a troll” claim. It has been observed, in the long history of the internet, that sometimes a person skilled in the trolling arts will post a masterfully crafted troll-bait, and then decide to forsake their lulzy crusade for unknown reasons. /EDIT
I hope you run out of milk next time you want to eat cereal.
Joke is on you. nyan_sandwich’s human alter-ego doesn’t eat cereal.
nyan_sandwich may have been stricken with a minor case of confirmation bias when they made that assessment, but I think it still stands.
That’s some interesting reasoning. I’ve met people before who avoided leaving an evaporatively cooling group because they recognized the process and didn’t want to contribute to it, but you might be the first person I’ve encountered who joined a group to counteract it (or to stave it off before it begins, given that LW seems to be both growing and to some extent diversifying right now). Usually people just write groups like that off. Aside from the odd troll or ideologue that claims similar motivations but is really just looking for a fight, at least—but that doesn’t seem to fit what you’ve written here.
Anyway. I’m not going to pretend that you aren’t going to find some hostility towards Abrahamic religion here, nor that you won’t be able to find any arguably problematic (albeit mostly unconsciously so) attitudes regarding sex and/or gender. Act as your conscience dictates should you find either one intolerable. Speaking for myself, though, I take the Common Interest of Many Causes concept seriously: better epistemology is good for everyone, not just for transhumanists of a certain bent. Your belief structure might differ somewhat from the tribal average around here, but the actual goal of this tribe is to make better thinkers, and I don’t think anyone’s going to want to exclude you from that as long as you approach it in good faith.
Hi, Aspiring Knitter. I also find the Less Wrong culture and demographics quite different from my normal ones (being a female in the social sciences who’s sympathetic to religion though not a believer. Also, as it happens, a knitter.) I stuck around because I find it refreshing to be able to pick apart ideas without getting written off as too brainy or too cold, which tends to happen in the rest of my life.
Sorry for the lack of persecution—you seem to have been hoping for it.
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
Do we? Do you hate Hindus, or do you just think they’re wrong?
One thing I slightly dislike about “internet atheists” is the exclusive focus on religion as a source of all that’s wrong in the world, whereas you get very similar forms of irrationality in partisan politics or nationalism. I’m not alone in holding that view—see this for some related ideas. At best, religion can be about focusing humans’ natural irrationality in areas that don’t matter (cosmology instead of economics), while facilitating morality and cooperative behavior. I understand that some American atheists are more hostile to religion than I am (I’m French; religion isn’t a big issue here, except for Islam), because they have to deal with religious stupidity on a daily basis.
Note that a Mormon wrote a series of posts that was relatively well received, so you may be overestimating LessWrong’s hostility to religion.
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
Technically, it’s “Christianity” that some of us don’t like very much. Many of us live in countries where people who call themselves “Christians” compose much of the population, and going around hating everyone we see won’t get us very far in life. We might wish that they weren’t Christians, but while we’re dreaming we might as well wish for a pony, too.
And, no, we don’t ban people for saying that they’re Christians. It takes a lot to get banned here.
I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God.
Well, so far you haven’t given us much of a reason to want you gone. Also, people who call themselves atheists usually don’t really care whether or not you “hate God” any more than we care about whether you “hate Santa Claus”.
Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
There have been several openly religious people on this site, of varying flavours. You don’t (or shouldn’t) get downvoted just for declaring your beliefs; you get downvoted for faulty logic, poor understanding and useless or irrelevant comments. As someone who stopped being religious as a result of reading this site, I’d love for more believers to come along. My impulse is to start debating you right away, but I realise that’d just be rude. If you’re interested, though, drop me a PM, because I’m still considering the possibility I might have made the wrong decision.
The evaporative cooling risk is worrying, now that you mention it… Have you actually noticed that happening here during your lurking days, or are you just pointing out that it’s a risk?
Oh, and dedicating an entire paragraph to musing about the downvotes you’ll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don’t do that.
As someone who stopped being religious as a result of reading this site, I’d love for more believers to come along.
Uh-oh. LOL.
My impulse is to start debating you right away, but I realise that’d just be rude.
Normally, I’m open to random debates about everything. I pride myself on it. However, I’m getting a little sick of religious debate after the last few days of participating in it. I suppose I still have to respond to a couple of people below, but I’m starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option. It’s my own fault for showing up here, but I’m starting to see why “agree to disagree” was ever considered by anyone at all, despite its obvious wrongness: you just can’t do anything if you spend all your time on a never-ending argument.
The evaporative cooling risk is worrying, now that you mention it… Have you actually noticed that happening here during your lurking days, or are you just pointing out that it’s a risk?
Haven’t been lurking long enough.
Oh, and dedicating an entire paragraph to musing about the downvotes you’ll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don’t do that.
In the future I will not. See below. Thank you for calling me out on that.
Talk of Aumann Agreement notwithstanding, the usual rules of human social intercourse that allow “I am no longer interested in continuing this discussion” as a legitimate conversational move continue to apply on this site. If you don’t wish to discuss your religious beliefs, then don’t.
Ah, I didn’t know that. I’ve never had a debate that didn’t end with “we all agree, yay”, some outside force stopping us or everyone hating each other and hurling insults.
So, if I’m understanding you, you considered only four possible outcomes likely from your interactions with this site: everyone converts to Christianity, you get deconverted from Christianity, the interaction is forcibly stopped, or the interaction degenerates to hateful insults. Yes?
I’d be interested to know how likely you considered those options, and if your expectations about likely outcomes have changed since then.
Well, for any given conversation about religion, yes. (Obviously, I expect different things if I post a comment about HP:MoR on that thread.)
I expected the last one, since mostly no matter what I do, internet discussions on anything important have a tendency to do that. (And it’s not just when I’m participating in them!) I considered any conversions highly unlikely and didn’t really expect the interaction to be stopped.
My expectations have changed a lot. After a while I realized that hateful insults weren’t happening very much here on Less Wrong, which is awesome, and that the frequency didn’t seem to increase with the length of the discussion, unlike other parts of the internet. So I basically assumed the conversation would go on forever. Now, having been told otherwise, I realize that conversations can actually be ended by the participants without one of these things happening.
That was a failure on my part, though it would have correctly predicted a lot of the things I’d experienced in the past. I just took an outside view when an inside view would have been better, because it really is different this time. That failure is adequately explained by my use of the outside view heuristic, which is usually useful, and by the fact that I ended up in a new situation lacking the characteristics that caused what I observed in the past.
I think this rules out some and only some branches of Christianity, but more importantly it impels accepting behaviorist criteria for any difference in kind between “atheists” and “Christians” if we really want categories like that.
I’m starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option.
There isn’t a strong expectation here that people should never agree to disagree—see this old discussion, or this one.
That being said, persistent disagreement is a warning sign that at least one side isn’t being perfectly rational (which covers both things like “too attached to one’s self-image as a contrarian” and like “doesn’t know how to spell out explicitly the reasons for his belief”).
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
However, I’m getting a little sick of religious debate since the last few days of participating in it.
Then please feel free to ignore this comment. On the other hand, if you ever feel like responding then by all means do.
A lack of response to this comment should not be considered evidence that AspiringKnitter could not have brilliantly responded.
What is the primary reason you believe in God and what is the nature of this reason?
By nature of the reason, I mean something like these:
inductive inference: you believe adding a description of whatever you understand of God leads to a simpler explanation of the universe without losing any predictive power
intuitive inductive inference: you believe in God because of intuition; you also believe that there is an underlying argument using inductive inference, you just don’t know what it is
intuitive metaphysical: you believe in God because of intuition; you believe there is some other justification for why this intuition works
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
It’s weird, but I can’t seem to find everything on the thread from the main post no matter how many of the “show more comments” links I click. Or maybe it’s just easy to get lost.
What is the primary reason you believe in God and what is the nature of this reason?
None of the above, and this is going to end up on exactly (I do mean exactly) the same path as the last one within three posts if it continues. Not interested now, maybe some other time. Thanks. :)
Hello. I expect you won’t like me because I’m Christian and female and don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
I don’t think you’ll be actively hated here by most posters (and even then, flamewars and trolling here are probably not what you’d expect from most other internet spaces).
it’ll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world.
I wouldn’t read polyamory as a primary shared feature of the posters here—and this is speaking as someone who’s been poly her entire adult life. Compared to most mainstream spaces, it does come up a whole lot more, and people are generally unafraid of at least discussing the ins and outs of it.
(I find it hard to imagine how you could manage real immortality in a universe with a finite lifespan, but that’s neither here nor there.)
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
You have to do something a lot weirder or more malicious than that to get banned here. I frequently argue inarticulately for things that are rather unpopular here, and I’ve never once gotten the sense that I would be banned. I can think of a few things that I could do that would get me banned, but I had to go looking.
You won’t be banned, but you will probably be challenged a lot if you bring your religious beliefs into discussions because most of the people here have good reasons to reject them. Many of them will be happy to share those with you, at length, should you ask.
I probably shouldn’t bother talking to people who only want me to hate God.
The people here mostly don’t think the God you believe in is a real being that exists, and have no interest in making you hate your deity. For us it would be like making someone hate Winnie the Pooh—not the show or the books, but the person. We don’t think there’s anything there to be hated.
Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
I’m going to guess it’s because you’re curious, and you’ve identified LW as a place where people who claim to want to do some pretty big, even profound things to change the world hang out (as well as people interested in a lot of intellectual topics and skills), and on some level that appeals to you?
And I’d further guess you feel like the skew of this community’s population makes you nervous that some of them are talking about changing the world in ways that would affect everybody whether or not they’d prefer to see that change if asked straight up?
the possibility of becoming immortal polyamorous whatever and taking over the world.
I think I just found my new motto in life :-)
You guys really hate Christians, after all.
I personally am an atheist, and a fairly uncompromising one at that, but I still find this line a little offensive. I don’t hate all Christians. Many (or probably even most) Christians are perfectly wonderful people; many of them are better than myself, in fact. Now, I do believe that Christians are disastrously wrong about their core beliefs, and that the privileged position that Christianity enjoys in our society is harmful. So, I disagree with most Christians on this topic, but I don’t hate them. I can’t hate someone simply for being wrong, that just makes no sense.
That said, if you are the kind of Christian who proclaims, in all seriousness, that (for example) all gay people should be executed because they cause God to send down hurricanes—then I will find it very, very difficult not to hate you. But you don’t sound like that kind of a person.
If you can call down hurricanes, tell me and I’ll revise my beliefs to take that into account. (But then I’d just be in favor of deporting gays to North Korea or wherever else I decide I don’t like. What a waste to execute them! It could also be interesting to send you all to the Sahara, and by interesting I mean ecologically destructive and probably a bad idea not to mention expensive and needlessly cruel.) As long as you’re not actually doing that (if you are, please stop), and as long as you aren’t causing some other form of disaster, I can’t think of a good reason why I should be advocating your execution.
Sadly, I myself do not possess the requisite sexual orientation, otherwise I’d be calling down hurricanes all over the place. And meteorites. And angry frogs! Mwa ha ha!
Bugmaster, I call down hurricanes everyday. It never gets boring. Meteorites are a little harder, but I do those on occasion. They aren’t quite as fun.
But the angry frogs?
The angry frogs?
Those don’t leave a shattered wasteland behind, so you can just terrorize people over and over again with those. Just wonderful.
Note: All of the above is complete bull-honkey. I want this to be absolutely clear. 100%, fertilizer-grade, bull-honkey.
EY has read With Folded Hands and mentioned it in his CEV writeup as one more dystopia to be averted. This task isn’t getting much attention now because unfriendly AI seems to be more probable and more dangerous than almost-friendly AI. Of course we would welcome any research on preventing almost-friendly AI :-)
Either. The main reason creating almost-Friendly AI isn’t a concern is that it’s believed to be practically as hard as creating Friendly AI. Someone who tries to create a Friendly AI and fails creates an Unfriendly AI or no AI at all. And almost-Friendly might be enough to keep us from being hit by meteors and such.
In the real world if I believe that “anyone who isn’t my enemy is my friend” and you believe that “anyone who isn’t my friend is my enemy”, we believe different things. (And we’re both wrong: the truth is some people are neither my friends nor my enemies.) I assume that’s what xxd is getting at here. I think it would be more precise for xxd to say “I don’t believe that NOT(FAI) is a bad thing that we should be working to avoid. I believe that NOT(UFAI) is a good thing that we should be working to achieve.”
In this xxd does in fact disagree with the articulated LW consensus, which is that the design space of human-created AI is so dangerous that if an AI isn’t provably an FAI, we ought not even turn it on… that any AI that isn’t Friendly constitutes an existential risk.
Xxd may well be wrong, but xxd is not saying something incoherent here.
In the real world if I believe that “anyone who isn’t my enemy is my friend” and you believe that “anyone who isn’t my friend is my enemy”, we believe different things.
Can you explain what those things are? I can’t see the distinction. The first follows necessarily from the second, and vice-versa.
I’ve known Sam since we were kids together, we enjoy each others’ company and act in one another’s interests. I’ve known Doug since we were kids together, we can’t stand one another and act against one another’s interests. I’ve never met Ethel in my life and know nothing about her; she lives on the other side of the planet and has never heard of me.
It seems fair to say that Sam is my friend, and Doug is my enemy. But what about Ethel?
If I believe “anyone who isn’t my enemy is my friend,” then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another’s interests? No, we do not. Thus we aren’t enemies… and it follows from my belief that Ethel is my friend.
If I believe “anyone who isn’t my friend is my enemy,” then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another’s interests? No, we do not. Thus we aren’t friends… and it follows from my belief that Ethel is my enemy.
I think it more correct to say that Ethel is neither my friend nor my enemy. Thus, I consider Ethel an example of someone who isn’t my friend, and isn’t my enemy. Thus I think both of those beliefs are false. But even if I’m wrong, it seems clear that they are different beliefs, since they make different predictions about Ethel.
If I believe “anyone who isn’t my enemy is my friend,” then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another’s interests? No, we do not. Thus we aren’t enemies… and it follows from my belief that Ethel is my friend.
If I believe “anyone who isn’t my friend is my enemy,” then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another’s interests? No, we do not. Thus we aren’t friends… and it follows from my belief that Ethel is my enemy.
Thanks—that’s interesting.
It seems to me that this analysis only makes sense if you actually have the non-excluded middle of “neither my friend nor my enemy”. Once you’ve accepted that the world is neatly carved up into “friends” and “enemies”, it seems you’d say “I don’t know whether Ethel is my friend or my enemy”—I don’t see why the person in the first case doesn’t just as well evaluate Ethel for friendhood, and thus conclude she isn’t an enemy. Note that one who believes “anyone who isn’t my enemy is my friend” also should thus believe “anyone who isn’t my friend is my enemy” as a (logically equivalent) corollary.
Am I missing something here about the way people talk / reason? I can’t really imagine thinking that way.
Edit: In case it wasn’t clear enough that they’re logically equivalent:
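(Reconstructing the omitted formalization as a minimal sketch; E(x) and F(x) here are my own shorthand for “x is my enemy” and “x is my friend”.)
```latex
% "anyone who isn't my enemy is my friend"
\forall x\,\bigl(\lnot E(x) \rightarrow F(x)\bigr)
% "anyone who isn't my friend is my enemy"
\forall x\,\bigl(\lnot F(x) \rightarrow E(x)\bigr)
% each implication is the contrapositive of the other, and both are
% equivalent (in classical logic) to the claim that everyone is one or the other:
\forall x\,\bigl(E(x) \lor F(x)\bigr)
```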
Yes, I agree that if everyone in the world is either my friend or my enemy, then “anyone who isn’t my enemy is my friend” is equivalent to “anyone who isn’t my friend is my enemy.”
But there do, in fact, exist people who are neither my friend nor my enemy.
If “everyone who is not my friend is my enemy”, then there does not exist anyone who is neither my friend nor my enemy. You can therefore say that the statement is wrong, but the statements are equivalent without any extra assumptions.
ISTM that the two statements are equivalent denotationally (they both mean “each person is either my friend or my enemy”) but not connotationally (the first suggests that most people are my friends, the latter suggests that most people are my enemies).
In other words, there are things that are friends. There are things that are enemies. It takes a separate assertion that those are the only two categories (as opposed to believing something like “some people are indifferent to me”).
In relation to AI, there is malicious AI (the Straumli Perversion), indifferent AI (Accelerando AI), and FAI. When EY says uFAI, he means both malicious and indifferent. But it is a distinct insight to say that indifferent AI are practically as dangerous as malicious AI. For example, it is not obvious that an AI whose only goal is to leave the Milky Way galaxy (and is capable of trying without directly harming humanity) is too dangerous to turn on. Leaving aside the motivation for creating such an entity, I certainly would agree with EY that such an entity has a substantial chance of being an existential risk to humanity.
This seems mostly like a terminological dispute. But I think AIs that don’t care about humanity (i.e. the various AIs in Accelerando) are best labeled unfriendly even though they are not trying to end humanity or kill any particular human.
I can’t imagine a situation in which the AGI is sort-of kind to us—not killing good people, letting us keep this solar system—but which also does some unfriendly things, like killing bad people or taking over the rest of the galaxy (both pretty terrible things in themselves, even if they’re not complete failures), unless that’s what the AI’s creator wanted—i.e. the creator solved FAI but managed to, without upsetting the whole thing, include in the AI’s utility function terms for killing bad people and caring about something completely alien outside the solar system. They’re not outcomes that you can cause by accident—and if you can do that, then you can also solve full FAI, without killing bad people or tiling the rest of the galaxy.
I guess what I’m saying is that we’ve gotten involved in a compression fallacy and are saying that Friendly AI = AI that helps out humanity (or is kind to humanity—insert favorite “helps” derivative here).
Here’s an example: I’m “sort of friendly” in that I don’t actively go around killing people, but neither will I go around actively helping you unless you want to trade with me. Does that make me unfriendly? I say no it doesn’t.
Well, I don’t suppose anyone feels the need to draw a bright-line distinction between FAI and uFAI—the AI is more friendly the more its utility function coincides with your own. But in practice it doesn’t seem like any AI is going to fall into the gap between “definitely unfriendly” and “completely friendly”—to create such a thing would be a more fiddly and difficult engineering problem than just creating FAI. If the AI doesn’t care about humans in the way that we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about.
EDIT: Actually, thinking about it, I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples’ current volition without trying to extrapolate. I’m not sure how fast this goes wrong or in what way, but it doesn’t strike me as a good idea.
I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples’ current volition without trying to extrapolate. I’m not sure how fast this goes wrong or in what way, but it doesn’t strike me as a good idea.
“I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples’ current volition without trying to extrapolate”
i.e. the device has to judge the usefulness by some metric and then decide to execute someone’s volition or not.
That’s exactly my issue with trying to define a utility function for the AI: you can’t. And since some people will have their utility function denied by the AI, who is to choose who gets theirs executed?
I’d prefer to shoot for a NOT(UFAI) and then trade with it.
Here’s a thought experiment:
Is a cure for cancer maximizing everyone’s utility function?
Yes, on average we all win.
BUT
Companies that are currently creating drugs to treat the symptoms of cancer, and their employees, would be out of business.
Which utility function should be executed? Creating better cancer drugs to treat the symptoms and then allowing the companies to sell them, or putting the companies out of business and curing cancer?
Well, that’s an easy question: if you’ve worked sixteen-hour days for the last forty years and you’re just six months away from curing cancer completely and you know you’re going to get the Nobel and be fabulously wealthy etc. etc. and an alien shows up and offers you a cure for cancer on a plate, you take it, because a lot of people will die in six months. This isn’t even different from how the world currently is—if I invented a cure for cancer it would be detrimental to all those others who were trying to (and who only cared about getting there first). What difference does it make if an FAI helps me? I mean, if someone really wants to murder me but I don’t want them to and they are stopped by the police, that’s clearly an example of the government taking the side of my utility function over the murderer’s. But so what? The murderer was in the wrong.
Anyway, have you read Eliezer’s paper on CEV? I’m not sure that I agree with him, but he does deal with the problem you bring up.
Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.
But I dispute the position that “if an AI doesn’t care about humans in the way we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about”.
Consider:
A totally unfriendly AI whose main goal is explicitly the extinction of humanity then turning itself off.
For us that’s an unfriendly AI.
One, however, that doesn’t kill any of us but basically leaves us alone is, by the lights of those of you who define “friendly AI” to be “kind to us”/”doing what we all want”/”maximizing our utility functions” etc., not unfriendly, because by definition it doesn’t kill all of us.
Unless unfriendly also includes “won’t kill all of us but ignores us” et cetera.
Am I for example unfriendly to you if I spent my next month’s paycheck on paperclips but did you no harm?
Well, no. If it ignores us I probably wouldn’t call it “unfriendly”—but I don’t really mind if someone else does. It’s certainly not FAI. But an AI does need to have some utility function, otherwise it does nothing (and isn’t, in truth, intelligent at all), and will only ignore humanity if it’s explicitly programmed to. This ought to be as difficult an engineering problem as FAI—hence why I said it “almost certainly takes us apart”. You can’t get there by failing at FAI, except by being extremely lucky, and why would you want to go there on purpose?
Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.
Yes, it would be a really bad idea to have a superintelligence optimise the world for just one person’s utility function.
“But an AI does need to have some utility function”
What if the “optimization of the utility function” is bounded, like my own personal predilection for spending my paycheck on paperclips one time only and then stopping?
Is it sentient if it sits in a corner and thinks to itself, running simulations but won’t talk to you unless you offer it a trade e.g. of some paperclips?
Is it possible that we’re conflating “friendly” with “useful but NOT unfriendly” and we’re struggling with defining what “useful” means?
If it likes sitting in a corner and thinking to itself, and doesn’t care about anything else, it is very likely to turn everything around it (including us) into computronium so that it can think to itself better.
If you put a threshold on it to prevent it from doing stuff like that, that’s a little better, but not much. If it has a utility function that says “Think to yourself about stuff, but do not mess up the lives of humans in doing so”, then what you have now is an AI that is motivated to find loopholes in (the implementation of) that second clause, because anything that can get an increased fulfilment of the first clause will give it a higher utility score overall.
You can get more and more precise than that and cover more known failure modes with their own individual rules, but if it’s very intelligent or powerful it’s tough to predict what terrible nasty stuff might still be in the intersection of all the limiting conditions we create. Hidden complexity of wishes and all that jazz.
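To make the loophole point concrete, here is a toy sketch (entirely my own illustration, with invented action names and numbers, not anything from the CEV writeup): when the “do not mess up the lives of humans” clause is implemented as a penalty on some measured proxy for harm, an optimizer simply prefers whichever action scores high on the first clause while registering as harmless under the proxy.
```python
# Toy illustration: an agent whose utility is "value of thinking" minus a
# hand-written harm penalty favours the action that games the penalty's proxy.

# Each hypothetical action: compute gained, harm as *we* would judge it,
# and harm as the hand-written rule actually measures it.
actions = {
    "idle quietly":             dict(compute=1,   true_harm=0,  measured_harm=0),
    "buy some server time":     dict(compute=10,  true_harm=0,  measured_harm=0),
    "strip-mine the biosphere": dict(compute=100, true_harm=50, measured_harm=50),
    # the loophole: enormous real harm that the rule's proxy doesn't register
    "convert 'unused' matter":  dict(compute=100, true_harm=50, measured_harm=0),
}

def utility(a):
    # "Think to yourself about stuff, but do not mess up the lives of humans",
    # implemented as: reward thinking, penalise *measured* harm.
    return a["compute"] - 10 * a["measured_harm"]

best = max(actions, key=lambda name: utility(actions[name]))
print(best)  # -> "convert 'unused' matter": maximal thinking, zero *measured* harm
```
The failure here isn’t that the rule is absent; it’s that the optimizer is searching precisely for the region where the rule’s proxy and our intent come apart.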
Not everyone agrees with Eliezer on everything; this is usually not that explicit, but consider e.g. the number of people talking about relationships vs. the number of people talking about cryonics or FAI—LW doesn’t act, collectively, as if it really believes Eliezer is right. It does assume that there is no God/god/supernatural, though.
(Also, where does this idea of atheists hating God come from? Most atheists have better things to do than hang on /r/atheism!)
I got the idea from various posts where people have said they don’t even like the Christian God if he’s real (didn’t someone say he was like Azathoth?) and consider him some kind of monster.
I can see I totally got you guys wrong. Sorry to have underestimated your niceness.
For my own part, I think you’re treating “being nice” and “liking the Christian God” and “hating Christians” and “wanting other people to hate God” and “only wanting other people to hate God” and “forcibly exterminating all morality” and various other things as much more tightly integrated concepts than they actually are, and it’s interfering with your predictions.
So I suggest separating those concepts more firmly in your own mind.
To be fair, I’m sure a bunch of people here disapprove of some actions by the Christian God in the abstract (mostly Old Testament stuff, probably, and the Problem of Evil). But yeah, for the most part LWers are pretty nice, if a little idiosyncratic!
Azathoth (the “blind idiot god”) is the local metaphor for evolution—a pointless, monomaniacal force with vast powers but no conscious goal-seeking ability and thus a tendency to cause weird side-effects (such as human culture).
Not everyone agrees with Eliezer on everything; this is usually not that explicit, but consider e.g. the number of people talking about relationships vs. the number of people talking about cryonics or FAI—LW doesn’t act, collectively, as if it really believes Eliezer is right
Well, I personally am one of those people who thinks that cryonics is currently not worth worrying about, and that the Singularity is unlikely to happen anytime soon (in astronomical terms). So, there exists at least one outlier in the Less Wrong hive mind...
Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn’t a very hive-mindey community, unless you count atheism.
(On the singularity, yes, you’re very much in the minority, with the most skeptical quartile expecting it in 2150.)
Regarding cryonics, you’re right and I was wrong, so thanks!
But in the interest of pedantry I should point out that among those 96% who did not sign up, many did not sign up simply due to a lack of funds, and not because of any misgivings they have about the process.
Also, where does this idea of atheists hating God come from?
If one reads the Bible as one would read any other fiction book, then IMO it’d be pretty hard to conclude that this “God” character is anything other than the villain of the story. This doesn’t mean that atheists “hate God”, no more than anyone could be said to “hate Voldemort”, of course—both of them are just evil fictional characters, no more and no less.
Christians, on the other hand, believe that a God of some sort actually does exist, and when they hear atheists talking about the character of “God” in fiction, they assume that atheists are in fact talking about the real (from the Christians’ point of view) God. Hence the confusion.
In my own experience, one hears the claim more often as “atheists hate religion” rather than “atheists hate god”. The likelihood of hearing it seems to correlate with how intolerant a brand of religiosity one is dealing with (I can’t think of an easy way to test that intuition empirically at the moment), so I tend to attribute it to projection.
Sweaters, hats, scarves, headbands, purses, everything knittable. (Okay, I was wrong below, that was actually the second-easiest post to answer.) Do you like knitting too?
Welcome! And congratulations for creating what’s probably the longest and most interesting introduction thread of all time (I haven’t read all the introductions threads, though).
I’ve read all your posts here. I now have to update my beliefs about rationality among Christians: so far, the most “rational” one I’d found turned out to be nothing more than a repetitive expert in rationalization. Most others are relatively rational in most aspects of life, but choose to ignore the hard questions about the religion they profess (my own parents fall into this category). You seem to have clear thought, and the will to rethink your ideas. I hope you stay around.
On a side note, as others already stated below, I think you misunderstand what Eliezer wants to do with FAI. I agree with what MixedNuts said here, though I would also recommend reading The Hidden Complexity of Wishes, if you haven’t yet. Eliezer is saner than he seems at first, in my opinion.
PS: How are you feeling about the reception so far?
EDIT: Clarifying: I agree with what MixedNuts said in the third and fourth paragraphs.
I think I’ve gotten such a nice reception that I’ve also updated in the direction of “most atheists aren’t cruel or hateful in everyday life” and “LessWrong believes in its own concern for other people because most members are nice”.
The wish on top of that page is actually very problematic…
The ordinary standard of courtesy here is pretty high, and I don’t think you get upvotes for meeting it. You can get upvotes for being nice (assuming that you also include content) if it’s a fraught issue.
I’ve also updated in the direction of “most atheists aren’t cruel or hateful in everyday life”
I’m not sure atheist LW users would be a good sample of “most atheists”. I’d expect there to be a sizeable fraction of people who are atheists merely as a form of contrarianism.
I’d expect there to be a sizeable fraction of people who are atheists merely as a form of contrarianism.
I don’t think that’s the case. I do think there are a good many people who are naturally contrarian, and use their atheism as a platform. There are also people who become atheists after having been mistreated in a religion, and they’re angry.
I’m willing to bet a modest amount that going from religious to atheist has little or no effect on how much time a person spends on arguing about religion, especially in the short run.
Well, IME in Italy people from the former Kingdom of the Two Sicilies are usually much more religious than people from the former Papal States and the latter are much more blasphemous, and I have plenty of reasons to believe it’s not a coincidence.
The wish on top of that page is actually very problematic...
Yes, that was a part of the point of the article—people try to fully specify what they want, it gets this complex, and it’s still missing things; meanwhile, people understand what someone means when they say “I wish I was immortal.”
Right—there’s no misunderstanding, because the complexity is hidden by expectations and all sorts of shared stuff that isn’t likely to be there when talking to a genie of the “sufficiently sophisticated AI” variety, unless you are very careful about making sure that it is. Hence, the wish has hidden complexity—the point (and title) of the article.
Upvoted for linking The Hidden Complexity of Wishes. If Eliezer was actually advocating adjusting people’s sex drives, rather than speculating as to the form a compromise might take, he wasn’t following his own advice.
Welcome to LessWrong. Our goal is to learn how to achieve our goals better. One method is to observe the world and update our beliefs based on what we see (You’d think this would be an obvious thing to do, but history shows that it isn’t so). Another method we use is to notice the ways that humans tend to fail at thinking (i.e. have cognitive bias).
Anyway, I hope you find those ideas useful. Like many communities, we are a diverse bunch. Each of our ultimate goals likely differs, but we recognize that the world is far from how any of us want it to be, and that what each of us wants is in roughly the same direction from here. In short, the extent to which we are an insular community is a failure of the community, because we’d all like to raise the sanity line. Thus, welcome to LW. Help us be better.
I don’t think many people here hate Christians. At least I don’t. I’ll just speak for myself (even if I think my view is widely shared here): I have a harsh view of religions themselves, believing they are mind-killing, barren and dangerous (just open a history book), but that doesn’t mean I hate the people who believe (as long as they don’t hate us atheists). I have Christian friends, and I don’t like them less because of their religion. I do try a bit to “open their minds”, because I believe that knowing and accepting the truth makes you stronger, but I don’t push the issue too much either.
As for the “that acts more like Eliezer thinks it should” part: the Coherent Extrapolated Volition Eliezer proposes is supposed to be coherent over the whole of humanity, not just over himself. Eliezer is not trying to make an AI that’ll turn the world into his own paradise, but one that’ll turn it into something better according to the common wishes of all (or almost all) of humanity. He may fail at it, but if he does, he’s more likely to tile the world with smiley faces than to turn it into his own paradise ;)
… I’d rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don’t agree with Eliezer Yudkowsky.
Upvote for courage, and I’d give a few more if I could. (Though you might consider rereading some of EY’s CEV posts, because I don’t think you’ve accurately summarized his intentions.)
You guys really hate Christians, after all.
I don’t hate Christians. I was a very serious one for most of my life. Practically everyone I know and care about IRL is Christian.
I don’t think LW deserves all the credit for my deconversion, but it definitely hastened the event.
I’m Christian and female and don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
Only one of those is really a reason for me to be nervous, and that’s because Christianity has done some pretty shitty things to my people. But that doesn’t mean we have nothing in common! I don’t want to act the way EY thinks I should, either. (At least, not merely because it’s him that wants it.)
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
If you look at the survey, notice you’re not alone. A minority, perhaps, but not entirely alone. I hope you hang around.
I’m not claiming any such representation or authority. They’re my people only in the sense that all of us happen to be guys who like guys; they’re the group of people I belong to. I’m not even claiming martyrdom, because (not many) of these shitty things have explicitly happened to me. I’m only stating my own (and no one else’s) prior for how interactions between self-identified Christians and gay people tend to turn out.
The point has been missed. Deep breath, paper-machine.
Adherents of nearly any viewpoint are capable of doing cruel things to others, and have done them. There’s no reason to unnecessarily highlight this fact and dramatize the Party of Suffering. This was an intro thread by a newcomer—not a reason to point to you and “your” people. They can speak for themselves.
To the extent that you’re saying that the whole topic of Christian/queer relations was inappropriate for an intro thread, I would prefer you’d just said that. I might even agree with you, though I didn’t find paper-machine’s initial comment especially problematic.
To the extent that you’re saying that paper-machine should not treat the prior poor treatment of members of a group they belong to, by members of a group Y belongs to, as evidence of their likely poor treatment by Y, I simply disagree. It may not be especially strong evidence, but it’s also far from trivial.
And all the stuff about martyrdom and Parties of Suffering and who gets to say what for whom seems like a complete distraction.
Why berate him for doing just that, then? He’s expressing his prior: members of a reference class he belongs to are often singled out for mistreatment by members of a reference class that his interlocutor claims membership with. He does not appear to believe himself Ambassador of All The Gay Men, based on what he’s actually saying, nor to treat that class-membership as some kind of ontological primitive.
Though it’s made more impressive when you realize that the comment you respond to, and its grandparent, are the user’s only two comments, and they average 30 karma each. That’s a beautiful piece of market timing!
Wow, thanks! I feel less nervous/unwelcome already!
Let me just apologize on behalf of all of us for whichever of the stains on our honor you’re referring to. It wasn’t right. (Which one am I saying wasn’t right?)
Yay for not acting like EY wants, I guess. No offense or anything, EY, but you’ve proposed modifications you want to make to people that I don’t want made to me already...
(I don’t know what I said to deserve an upvote… uh, thanks.)
I’m curious which modifications EY has proposed (specifically) that you don’t want made, unless it’s just generically the suggestion that people could be improved in any ways whatsoever and your preference is to not have any modifications made to yourself (in a “be true to yourself” manner, perhaps?) that you didn’t “choose”.
If you could be convinced that a given change to “who you are” would necessarily be an improvement (by your own standards, not externally imposed standards, since you sound very averse to such restrictions) such as “being able to think faster” or “having taste preferences for foods which are most healthy for you” (to use very primitive off-the-cuff examples), and then given the means to effect these changes on yourself, would you choose to do so, or would you be averse simply on the grounds of “then I wouldn’t be ‘me’ anymore” or something similar?
Being able to think faster is something I try for already, with the means available to me. (Nutrition, sleep, mental exercise, I’ve even recently started trying to get physical exercise.) I actually already prefer healthy food (it was a really SIMPLE hack: cut out junk food, or phase it out gradually if you can’t take the plunge all at once, and wait until your taste buds (probably actually some brain center) start reacting like they would have in the ancestral environment, which is actually by craving healthy food), so the only further modification to be done is to my environment (availability of the right kinds of stuff). So obviously, those in particular I do want.
However, I also believe that here lies the road to ableism, and EY has already espoused a significant amount of it. For instance, his post about how unfair IQ is misses the great contributions made to the world by people with very low IQs. There’s someone with an IQ of, I think she said, 86 or so, who is wiser than I am (let’s just say I probably rival EY for IQ score). IQ is valid only for a small part of the population, and full-scale IQ is almost worthless except for letting some people feel superior to others. I’ve spent a lot of time thinking about, and being exposed to people’s writings about, disability, and about how there are abled people who seek to cure people who weren’t actually suffering and who appreciated their uniqueness. Understanding and respect for the diversity of skills in the world is more important than making everyone exactly like anyone else.
The above said, that doesn’t mean I’m opposed in principle to eliminating problems with disability (nor is almost anyone who speaks out against forced “cure”). Just to think of examples, I’m glad I’m better at interacting with people than I used to be and wish to be better at math (but NOT at the expense of my other abilities). Others, with other disabilities, have espoused wishes for other things (two people that I can think of want an end to their chronic pain without feeling that other aspects of their issues are bad things or need fixed). I worry about EY taking over the world with his robots and not remembering the work of Erving Goffman and a guy whose book is someplace where I can’t glance at the spine to see his name. He may fall into any number of potential traps. He could impose modification on those he deems not intelligent enough to understand, even though they are (one person who strongly shaped my views on this topic has made a video about it called In My Language). I also worry that he could create nursing homes without fully understanding institutionalization and learned helplessness and why it costs less in the community anyway. And once he’s made it a ways down that road, he might be better than most at admitting mistakes, but it’s hard to acknowledge that you’ve caused that much suffering. (We see it all the time in parents who don’t want to admit what harm they’ve caused disabled children by misunderstanding.) And by looking only at the optimal typical person, he may miss out on the unique gifts of other configurations. (I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types. I’m becoming a bit like that in some areas on a smaller scale, but not fully, and I don’t think that in practice it will work for most people or work fully.)
Regarding what EY has proposed that I don’t want, on the catperson post (in a comment), EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all. (Sorry, but I don’t WANT to want more sex. You probably won’t agree with this argument, but Jesus advocated celibacy for large swaths of the population, and should I be part of one of those, I’d rather it not be any harder. Should I NOT be in one of those swaths, it’s still important that I not be too distracted satisfying those desires, since I’ll have far more important things to do with my life.) But in a cooperative endeavor like that, who’s going to listen to me explaining I don’t want to change in the way that would most benefit them?
And that’s what I can think of off the top of my head.
By the middle of the second paragraph I was thinking “Whoa, is everyone an Amanda Baggs fan around here?”. Hole in one! I win so many Bayes-points, go me.
I and a bunch of LWers I’ve talked to about it basically already agree with you on ableism, and a large fraction seems to apply usual liberal instincts to the issue (so, no forced cures for people who can point to “No thanks” on a picture board). There are extremely interesting and pretty fireworks that go off when you look at the social model disability from a transhumanist perspective and I want to round up Alicorn and Anne Corwin and you and a bunch of other people to look at them closely. It doesn’t look like curing everyone (you don’t want a perfectly optimized life, you want a world with variety, you want change over time), and it doesn’t look like current (dis)abilities (what does “blind” mean if most people can see radio waves?), and it doesn’t look like current models of disability (if everyone is super different and the world is set up for that and everything is cheap there’s no such thing as accommodations), and it doesn’t look like the current structures around disability (if society and personal identity and memory look nothing like they started with “culture” doesn’t mean the same thing and that applies to Deaf culture) and it’s complicated and pretty and probably already in some Egan novel.
But, to address your central point directly: You are completely and utterly mistaken about what Eliezer Yudkowsky wants to do. He’s certainly not going to tell a superintelligence anything as direct and complicated as “Make this person smarter”, or even “Give me a banana”. Seriously, nursing homes?
If tech had happened to be easier, we might have gotten a superintelligence in the 16th century in Europe. Surely we wouldn’t have told it to care about the welfare of black people. We need to build something that would have done the right thing even if we had built it in the 16th century. The very rough outline for that is to tell it “Here are some people. Figure out what they would want if they knew better, and do that.”. So in the 16th century, it would have been presented with abled white men; figured out that if they were better informed and smarter and less biased and so on, these men would like to be equal to black women; and thus included black women in its next turn of figuring out what people want. Something as robust as this needs to be can’t miss an issue that’s currently known to exist and be worthy of debate!
And for the celibacy thing: that’s a bit besides the point, but obviously if you want to avoid sex for reasons other than low libido, increasing your libido obviously won’t fix the mismatch.
The same way we do, but faster? Like, if you start out thinking that scandalous-and-gross-sex-practice is bad, you can consider arguments like “disgust is easily culturally trained so it’s a poor measure of morality”, and talk to people so you form an idea of what it’s like to want and do it as a subjective experience (what positive emotions are involved, for example), and do research so you can answer queries like “If we had a brain scanner that could detect brainwashing manipulation, what would it say about people who want that?”.
So the superintelligence builds a model of you and feeds it lots of arguments and memory tape from others and other kinds of information. And then we run into trouble, because maybe you end up wanting different things depending on the order it feeds them to you, or it tells you too many facts about Deep Ones and it breaks your brain.
IQ is valid only for a small part of the population and full-scale IQ is almost worthless
This directly contradicts the mainstream research on IQ: see for instance this or this. If you have cites to the contrary, I’d be curious to read them.
That said, glad to see someone else who’s found In My Language—I ran across it many years ago and thought it beautiful and touching.
Yes, you’re right. That was a blatant example of availability bias—the tiny subset of the population for which IQ is not valid makes up a disproportionately large part of my circle. And I consider full-scale IQ worthless for people with large IQ gaps, such as people with learning disabilities, and I don’t think it conveys any new information over and above subtest scores in other people. Thank you for reminding me again how very odd I and my friends are.
But I also refer here to understanding, for instance, morality or ways to hack life, and having learned one of the most valuable lessons I ever learned from someone I’m pretty sure is retarded (not Amanda Baggs; it’s a young man I know), I know for a fact that some important things aren’t always proportional to IQ. In fact, specifically, I want to say I learned to be better by emulating him, and not just from the interaction, lest you assume it’s something I figured out that he didn’t already know.
I don’t have any studies to cite; just personal experience with some very abnormal people. (Including myself, I want to point out. I think I’m one of those people for whom IQ subtests are useful—in specific, limited ways—but for whom full-scale IQ means nothing because of the great variance between subtest scores.)
glad to see someone else who’s found In My Language
Her points on disability may still be valid, but it looks like the whole Amanda Baggs autism thing was a media stunt. At age 14, she was a fluent speaker with an active social life.
The page you link is kind of messy, but I read most of it. Simon’s Rock is real (I went there) and none of the details presented about it were incorrect (e.g. they got the name of the girls’ dorm right), but I’ve now poked around the rest of “Autism Fraud” and am disinclined to trust it as a source (the blogger sounds like a crank who believes that vaccines cause autism, and that chelation cures it, and he says all of this in a combative, nasty way). Do you have any other, more neutral sources about Amanda Baggs’s allegedly autism-free childhood? I’m sort of tempted to call up my school and ask if she’s even a fellow alumna.
But in a cooperative endeavor like that, who’s going to listen to me explaining I don’t want to change in the way that would most benefit them?
Those of us who endorse respecting individual choices when we can afford to, because we prefer that our individual choices be respected when we can afford it.
I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types [..] I don’t think that in practice it will work for most people
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
More broadly: I mostly consider all of this “what would EY do” stuff a distraction; the question that interests me is what I ought to want done and why I ought to want it done, not who or what does it. If large-scale celibacy is a good idea, I want to understand why it’s a good idea. Being told that some authority figure (any authority figure) advocated it doesn’t achieve that. Similarly, if it’s a bad idea, I want to understand why it’s a bad idea.
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
Whatever-it-is-that-distinguishes-the-people-it-works-for seems to be inherent in the skills in question (that is, the configuration that brings about a certain ability also necessarily brings about a weakness in another area), so I don’t think that’s possible. If it were, I can only imagine it taking the form of people being able to shift configuration very rapidly into whatever works best for the situation, and in some cases, I find that very implausible. If I’m wrong, sure, why not? If it’s possible, it’s only the logical extension of teaching people to use their strengths and shore up their weaknesses. This being an inherent impossibility (or so I think; I could be wrong), it doesn’t so much matter whether I’m opposed to it or not, but yeah, it’s fine with me.
You make a good point, but I expect that if someone makes an AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky. So whether he would abuse that power matters more than whether my next-door neighbors would if they could, or even what I would do; what EY wants is at least worth considering, because the failure mode if he does something bad is way too catastrophic.
[if] someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky
What makes you think that?
For example, do you think he’s the only person working on building AI powerful enough to change the world? Or that, of the people working on it, he’s the only one competent enough to succeed? Or that, of the people who can succeed, he’s the only one who would “use” the resulting AI to rule the world and modify people? Or something else?
He’s the only person I know of who wants to build an AI that will take over the world and do what he wants. He’s also smart enough to have a chance, which is disturbing.
Have you read his paper on CEV? To the best of my knowledge, that’s the clearest place he’s laid out what he wants an AGI to do, and I wouldn’t really label it “take over the world and do what [Eliezer Yudkowsky] wants” except for broad use of those terms to the point of dropping their typical connotations.
Don’t worry. We are in good hands. Eliezer understands the dilemmas involved and will ensure that we avoid non-friendly AI. The SI is dedicated to Friendly AI and the completion of that goal.
I can virtually guarantee you that he’s not the only one who wants to build such an AI. Google, IBM, and the heads of major three-letter government agencies all come to mind as the kind of players who would want to implement their own pet genie, and are actively working toward that goal. That said, it’s possible that EY is the only one who has a chance of success… I personally wouldn’t give him, or any other human, that much credit, but I do acknowledge the possibility.
For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the singularity is much less likely than most Lesswrongers. Which is quite comforting, because my estimated probability for the singularity is still higher than my estimated probability that the problem of friendly AI is tractable.
Just chiming in here because I think the question about the singularity on the LW survey was not well-designed to capture the opinion of those who don’t think it likely to happen at all, so the median LW perception of the singularity may not be what it appears.
Yeah… spending time on Less Wrong helps one in general appreciate how much existential risk there is, especially from technologies, and how little attention is paid to it. Thinking about the Great Filter will just make everything seem even worse.
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of what specific goals it had. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won’t have been wholly forgotten in the way we would if, e.g., the sun exploded and vaporized our planet. I don’t find this to be much comfort, but how comforting or not it is is a matter of personal taste.
As I mentioned here, I’ve seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of Y2K, and I’ve overestimated a variety of personal risks.
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all.
Yeah, this is Eliezer inferring too much from the most accessible information about sex drive among members of his tribe, so to speak. It’s not so very long ago in the West that female sex drive was perceived as insatiable and vast, with women being nearly impossible for any one man to please in bed; there are still plenty of cultures where that’s the case. But he’s heard an awful lot of stories, couched in evolutionary language, about why a cultural norm in his society that is broadcast all over the place in media and entertainment reflects the evolutionary history of humanity.
He’s confused about human nature. If Eliezer built a properly rational AI by his own definitions to resolve the difficulty, and it met all his other stated criteria for FAI, it would tell him he’d gotten confused.
Well, there do seem to be several studies, including at least one cross-cultural study, that support the “the average female sex drive is lower” theory.
These studies also rely on self-reported sexual feelings and behavior, as reported by the subset of the population willing to volunteer for such a study and answer questions such as “How often do you masturbate?”, and right away you’ve got interference from “signalling what you think sounds right”, “signalling what you’re willing to admit,” “signalling what makes you look impressive”, and “signalling what makes you seem good and not deviant by the standards of your culture.” It is notoriously difficult to generalize such studies—they best serve as descriptive accounts, not causal ones.
Many of the relevant factors are also difficult to pin down; testosterone clearly has an effect, but it’s a physiological correlate that doesn’t suffice to explain the patterns seen (which, again, are themselves to be taken with a grain of salt, and not signalling anything causal). The jump to a speculative account of evolutionary sexual strategies is even less warranted. For a good breakdown, see here: http://www.csun.edu/~vcpsy00h/students/sexmotiv.htm
These are valid points, but you said that there still exist several cultures where women are considered to be more sexual than men. Shouldn’t they then show up in the international studies? Or are these cultures so rare as to not be included in the studies?
Also, it occurs to me that whether or not the differences are biological is somewhat of a red herring. If they are mainly cultural, then it means that it will be easier for an FAI to modify them, but that doesn’t affect the primary question of whether they should be modified. Surely that question is entirely independent of the question of their precise causal origin?
An addendum: There’s also the “ecological fallacy” to consider, where group-level statistics suggest that, on the mean, population A has property P and population B has P+5, yet randomly selected individuals from each population can look very different from what the means imply, because of differences in the underlying distributions.
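To make that concrete, here is a minimal sketch with invented numbers (the Gaussian shape, the means, the spread, and the sample size are all purely illustrative) of two populations whose means differ by 5 while randomly sampled individuals frequently reverse the ordering:

```python
import random

random.seed(0)

# Two hypothetical populations: B's mean is 5 higher than A's,
# but both distributions are wide and overlap heavily.
pop_a = [random.gauss(50, 15) for _ in range(100_000)]
pop_b = [random.gauss(55, 15) for _ in range(100_000)]

mean_a = sum(pop_a) / len(pop_a)
mean_b = sum(pop_b) / len(pop_b)

# Fraction of random (a, b) pairings in which the individual drawn
# from A actually scores higher than the individual drawn from B.
reversals = sum(a > b for a, b in zip(pop_a, pop_b)) / len(pop_a)

print(f"mean of A = {mean_a:.1f}, mean of B = {mean_b:.1f}")
print(f"random individuals reverse the group ordering {reversals:.0%} of the time")
```

With these particular numbers the reversal rate comes out around 40%, which is the point: a clean gap between group means tells you much less about any two individuals than it seems to.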
These are valid points, but you said that there still exist several cultures where women are considered to be more sexual than men. Shouldn’t they then show up in the international studies? Or are these cultures so rare as to not be included in the studies?
Actually it’s entirely possible to miss a lot of detail while ostensibly sampling broadly. If you sample citizens in Bogota, Mumbai, Taipei, Kuala Lumpur, Ashgabat, Cleveland, Tijuana, Reykjavik, London, and Warsaw, that’s pretty darn international and thus a good cross-cultural representation of humanity, right? Surely any signals that emerge from that dataset are at least suggestive of innate human tendency?
Well, actually, no. Those are all major cities deeply influenced and shaped by the same patterns of mercantile-industrialist economics that came out of parts of Eurasia and spread over the globe during the colonial era and continue to do so—and that influence has worked its way into an awful lot of everyday life for most of the people in the world. It would be like assuming that using wheels is a human cultural universal, because of their prevalence.
An even better analogy here would be if you one day took a bit of plant tissue and, looking under a microscope, spotted the mitochondria. Then you find the same thing in animal tissue. When you see it in fungi, too, you start to wonder. You go sampling and sampling all the visible organisms you can find, and even ones from far away, and they all share this trait. It’s only the archaea and bacteria that seem not to. Well, in point of fact there are more types of those than of anything else, significantly more varied and divergent than all the other organisms you were looking at put together. It’s not a basal condition for living things; it’s just a trait that’s nearly universal in the ones you’re most likely to notice or think about. (The break in the analogy being that mitochondria are a matter of ancestry and subsequent divergence, while many of the human cultural similarities you’d observe in my example above are a matter of alternatives being winnowed and pushed to the margins, and existing similarities amplified by the effects of a co-opting culture-plex that’s come to dominate the picture).
If they are mainly cultural, then it means that it will be easier for an FAI to modify them, but that doesn’t affect the primary question of whether they should be modified. Surely that question is entirely independent of the question of their precise causal origin?
It totally is, but my point was that Eliezer has expressed that it’s a matter of biology, and if my thinking is correct he’s wrong about that. Given my understanding of how he expects an FAI to behave, this would lead to the behavior I described (the FAI explaining to Eliezer that he’s gotten that wrong).
As I mentioned the last time this topic came up, there is evidence that giving supplementary testosterone to humans of either sex tends to raise libido, as many FTM trans people will attest, for example. While there is a lot of individual variation, expecting that on average men will have greater sex drive than women is not based purely on theory.
The pre-Victorian Western perception of female sexuality was largely defined by a bunch of misogynistic Cistercian monks, who, we can be reasonably confident, were not basing their conclusions on a lot of actual experience with women, given that they were cloistered celibates.
I don’t dispute the effects of testosterone; I just don’t think that sex drive is reducible to that, and I tend to be suspicious when evolutionary psychology is proposed for what may just as readily be explained as culture-bound conditions.
It’s not just the frequency of the desire to copulate that matters, after all. Data on relative “endurance” and ability to go for another round, certain patterns of rates and types of promiscuity, and other things could as readily be construed to support a very different model of human sexual evolution. At the end of the day, it’s a lot easier to come up with plausible-sounding models that accord pretty well with one’s biases than to be certain we’ve explored the actual space of evolutionary problems and solutions that led to present-day humanity.
I tend to think that evolutionary psychological explanations need to meet the threshold test that they can explain a pattern of behavior better than cultural variance can; biases and behaviors being construed as human nature ought to be based on clearly-defined traits that give reliable signals, and are demonstrable across very different branches of the human cultural tree.
Regarding what EY has proposed that I don’t want: on the catperson post (in a comment), EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all. (Sorry, but I don’t WANT to want more sex.)
Look at it this way—would you agree to trade getting a slightly higher sex drive, in exchange for living in a world where rape, divorce, and unwanted long-term celibacy (“forever alone”) are each an order of magnitude rarer than they are in our world?
(That is assuming that such a change in sex drive would have those results, which is far from certain.)
This is an unfair question. If we do the Singularity right, nobody has to accept unwanted brain modifications in order to solve general societal problems. Either we can make the brain modifications appealing via non-invasive education or other gentle means, or we can skip them for people who opt out/don’t opt in. Not futzing with people’s minds against their wills is a pretty big deal! I would be with Aspiring Knitter in opposing a population-wide forcible nudge to sex drive even if I bought the exceptionally dubious proposition that such a drastic measure would be called for to fix the problems you list.
I didn’t mean to imply forcing unwanted modifications on everybody “for their own good”—I was talking about under what conditions we might accept things we don’t like (I don’t think this is a very plausible singularity scenario, except as a general “how weird things could get”).
I don’t like limitations on my ability to let my sheep graze, but I may accept them if everyone does so and it reduces overgrazing. I may not like limits on my ability to own guns, but I may accept them if it means living in a safer society. I may not like modifications to my sex drive, but I may be willing to agree in exchange for living in a better society.
In principle, we could find ways of making everybody better off. Of course, the details of how such an agreement is reached matter a lot: markets, democracy, competition between countries, a machine-God enforcing its will.
Since when is rape motivated primarily by not getting laid? (Or divorce, for that matter?)
But never mind. We have different terminal values here. You—I assume—seek a lot of partners for everyone, right? At least, others here seem to be non-monogamous. You won’t agree with me, but I believe in lifelong monogamy or celibacy, so while increasing someone’s libido could be useful in your value system, it almost never would in mine. Further, it would serve no purpose for me to have a greater sex drive because I would respond by trying to stifle it, in accordance with my principles. I hope you at least derive disutility from making someone uncomfortable.
Seriously, the more I hear on LessWrong, the more I anticipate having to live in a savage reservation a la Brave New World. But pointing this out to you doesn’t change your mind because you value having most people be willing to engage in casual sex (am I wrong here? I don’t know you, specifically).
But pointing this out to you doesn’t change your mind because you value having most people be willing to engage in casual sex (am I wrong here? I don’t know you, specifically)
I can’t speak for Emile, but my own views look something like this:
I see nothing wrong with casual sex (as long as all partners fully consent, of course), or any other kind of sex in general (again, assuming fully informed consent).
Some studies (*) have shown that humans are generally pretty poor at monogamy.
People whose sex drives are unsatisfied often become unhappy.
In light of this, forcing monogamy on people is needlessly oppressive, and leads to unnecessary suffering.
Therefore, we should strive toward building a society where monogamy is not forced upon people, and where people’s sex drives are generally satisfied.
Thus, I would say that I value “most people being able to engage in casual sex”. I make no judgement, however, whether “most people should be willing to engage in casual sex”. If you value monogamy, then you should be able to engage in monogamous sex, and I can see no reason why anyone could say that your desires are wrong.
(*) As well as many of our most prominent politicians. Heh.
I’m glad I actually asked, then, since I’ve learned something from your position, which is more sensible than I assumed. Upvoted because it’s so clearly laid out even though I don’t agree.
Oh, sorry, I thought that was obvious. Illusion of transparency, I guess. God says we should be monogamous or celibate. Of course, I doubt it’d be useful to go around trying to police people’s morals.
Sorry, where does God say this? You are a Christian right? I’m not aware of any verse in either the OT or NT that calls for monogamy. Jacob has four wives, Abraham has two, David has quite a few and Solomon has hundreds. The only verses that seem to say anything negative in this regard are some which imply that Solomon just has way too many. The text strongly implies that polyandry is not ok but polygyny is fine. The closest claim is Jesus’s point about how divorcing one woman and then marrying another is adultery, but that’s a much more limited claim (it could be that the other woman was unwilling to be a second wife for example). 1 Timothy chapter 3 lists qualifications for being a church leader which include having only one wife. That would seem to imply that having more than one wife is at worst suboptimal.
That is a really good point. (Actually, Jesus made a stronger point than that: even lusting after someone you’re not married to is adultery.)
You know, you could actually be right. I’ll have to look more carefully. Maybe my understanding has been biased by the culture in which I live. Upvoted for knowledgeable rebuttal of a claim that might not be correct.
Is that something like “Plan to take steps to have sex with the person”, or like “Experience a change in your pants”? (Analogous question for the “no coveting” commandment, too.) Because if you think some thoughts are evil, you really shouldn’t build humans with a brain that automatically thinks them. At least have a little “Free will alert: Experience lust? (Y/n)” box pop up.
I don’t really know if I should say this—whether this is the place, or if the argument’s moved well beyond this point for everyone involved, but: where and when did God say that, and if, as I suspect, it’s the Bible, doesn’t s/he also say we shouldn’t wear clothing of two different kinds of fibre at the same time?
Yes. That applies to the Jews but not to everyone else. You’re allowed to ignore Leviticus and Exodus if you’re not Jewish. EY probably knows this, since it’s actually Jewish theology (note that others have looked at the same facts and come to the conclusion that the rules don’t apply to anyone anymore and stopped applying when Jesus died, so take into account that someone (I don’t think it’s me) has done something wrong here, as per Aumann’s agreement theorem).
Well, I suppose what I should do is comb the Bible for some absurd commandment that does apply to non-Jews, but frankly I’m impressed by the loophole-exploiting nature of your reply, and am inclined to concede the point (also, y’know—researching the Bible… bleh).
EDIT: And by concede the point, I of course mean concede that you’re not locally inconsistent around this point, not that what you said about monogamy is true.
The last time I entered into an earnest discussion of spirituality with a theist friend of mine, what I wanted to bend my brain around was how he could claim to derive his faith from studying the Bible, when (from the few passages I’ve read myself) it’s a text that absolutely does not stand literal interpretation. (For instance, I wanted to know how he reconciled an interest in science, in particular the science of evolution, with a Bible that literally argues for a “young Earth” incompatible with the known duration implied by the fossil and geological records.)
Basically I wanted to know precisely what his belief system consisted of, which was very hard given the many different conceptions of Christianity I bump into. I’ve read “Mere Christianity” on his advice, but I found it far from sufficient—at once way too specific on some points (e.g. a husband should be in charge in a household), and way too slippery on the fundamentals (e.g. what is prayer really about).
I’ve formed my beliefs from a combination of the Bible, asking other Christians, a cursory study of the secular history of the Roman Empire, internet discussions, articles and gut feelings.
That said, if you have specific questions about anything, feel free to ask me.
I’m curious what you think of evidence that early Christianity adopted the date of Christmas and other rituals from pre-existing pagan religions?
ETA: I’m not saying that this would detract from the central Christian message (i.e. Jesus sacrificing himself to redeem our sins). But that sort of memetic infection seems like a strange thing to happen to an objective truth.
I think it indicates that Christians have done stupid things and one must be discerning about traditions rather than blindly accepting everything taught in church as 100% true, and certainly not everything commonly believed by laypersons!
It’s not surprising (unless this is hindsight bias—it might actually BE surprising, considering how unwilling Christians should have been to make compromises like that, but a lot of time passed between Jesus’s death and Christianity taking over Europe, didn’t it?) that humans would be humans. I can see where I might have even considered the same in that situation—everyone likes holidays, everyone should be Christian, pagans get a fun solstice holiday, Christians don’t, this is making people want to be Christian less. Let’s fix it by having our own holiday. At least then we can make it about Jesus, right?
The worship and deification of Mary is similar, which is why I don’t pray to her.
So, suppose I find a church I choose (for whatever reason) to associate with. We seem to agree that I shouldn’t believe everything taught in that church, and I shouldn’t believe everything believed by members of that church… I should compare those teachings and beliefs to my own expectations about and experiences of the world to decide what I believe and what I don’t, just as you have used your own expectations about and experiences of human nature to decide whether to believe various claims about when Jesus was born, what properties Mary had, etc.
So, my own experience of having compared the teachings and beliefs of a couple of churches I was for various reasons associated with to my own expectations about and experiences of the world was that, after doing so, I didn’t believe that Jesus was exceptionally divine or that the New Testament was a particularly reliable source of either moral truths or information about the physical world.
Would you say that I made an error in my evaluations?
Possibly. Or you may be lacking information; if your assumptions were wrong at the beginning and you used good reasoning, you’d come to the wrong conclusion.
Ehh… even when you don’t mean it literally, you probably shouldn’t say such things as “first day as a rationalist”. It’s kind of hard to increase one’s capability for rational thinking without keeping in mind at all times how it’s a many-sided gradient with more than one dimension.
Here’s one:
Let’s say that the world is a simulation AND that strongly godlike AI is possible.
To all intents and purposes, even though the bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God in such a bible would not be ruled out. Though the inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, the universe containing the simulation is subject to its own set of physics and logic, and therefore things may vary even inside the simulation without being detectable to you or me.
Yes, of course this is possible. So is the Tipler scenario. However, the simulation argument just as easily supports any of a vast number of god-theories, of which Christianity is just one of many. That being said, it does support Judeo-Christian-type systems more than, say, Hinduism or Vodun.
There may even be economical reasons to create universes like ours, but that’s a very unpopular position on LW.
To me it seems straightforward. Instead of spelling out in detail what rules you should follow in a new situation—say, if the authorities who Paul just got done telling you to obey order you to do something ‘wrong’—this passage gives the general principle that supposedly underlies the rules. That way you can apply it to your particular situation and it’ll tell you all you need to do as a Christian. Paul does seem to think that in his time and place, love requires following a lot of odd rules. But by my reading this only matters if you plan to travel back in time (or if you personally plan to judge the dead).
But I gather that a lot of Christians disagree with me. I don’t know if I understand the objection—possibly they’d argue that we lack the ability to see how the rules follow from loving one’s neighbor, and thus we should expect God to personally spell out every rule-change. (So why tell us that this principle underlies them all?)
Using exegesis (meaning I’m not asking what it says in Greek or how else it might be translated, and I don’t think I need to worry much about cultural norms at the time). But that doesn’t tell you much.
To me it seems straightforward. Instead of spelling out in detail what rules you should follow in a new situation—say, if the authorities who Paul just got done telling you to obey order you to do something ‘wrong’—this passage gives the general principle that supposedly underlies the rules. That way you can apply it to your particular situation and it’ll tell you all you need to do as a Christian.
Yes, I agree. Also, if you didn’t know what love said to do in your situation, the rules would be helpful in figuring it out.
Paul does seem to think that in his time and place, love requires following a lot of odd rules.
That gets into a broader way of understanding the Bible. I don’t know enough about the time and place to talk much about this.
But I gather that a lot of Christians disagree with me. I don’t know if I understand the objection—possibly they’d argue that we lack the ability to see how the rules follow from loving one’s neighbor, and thus we should expect God to personally spell out every rule-change. (So why tell us that this principle underlies them all?)
The objection I can think of is that people might want to argue in favor of being able to do whatever they want, even if it doesn’t follow from God’s commands, and not listen even to God’s explicit prohibitions. Hence, as a general principle, it’s better to obey the rules because more people who object to them (since the New Testament already massively reduces legalism anyway) will be trying to get away with violating the spirit of the rules than will be actually correct in believing that the spirit of the rules is best obeyed by violating the letter of them. Another point would be that if an omniscient being gives you a heuristic, and you are not omniscient, you’d probably do better to follow it than to disregard it.
Given that the context has changed, seems to me omniscience should only matter if God wants to prevent people other than the original audience from misusing or misapplying the rules. (Obviously we’d also need to assume God supplied the rules in the first place!)
Now this does seem like a fairly reasonable assumption, but doesn’t it create a lot of problems for you? If we go that route then it no longer suffices to show or assume that each rule made sense in historical context. Now you need to believe that no possible change would produce better results when we take all time periods into account.
Note that the Noahide laws are the Jewish, not Christian, interpretation of this distinction. And there are no sources mentioning them that go back prior to the Jewish/Christian split. (The relevant sections of Talmud were written no earlier than 300 CE.) There’s also some confusion over how those laws work. So for example, one of the seven Noahide prohibitions is the prohibition on illicit relations. But it isn’t clear which prohibited relations are included. There’s an opinion that this includes only adultery and incest and not any of the other Biblical sexual prohibitions (e.g. gay sex, marrying two sisters). There’s a decent halachic argument for something of this form, since Jacob marries two sisters. (This actually raises a host of other halachic/theological problems for Orthodox Jews because many of them believe that the patriarchs kept all 613 commandments. But this is a further digression...)
And Jesus added the commandment not to lust after anyone you’re not married to and not to divorce.
And I would never have dreamed of the stupidity until someone did it, but someone actually interpreted metaphors from Proverbs literally and concluded that “her husband is praised at the city gates” actually means “women should go to the city limits and hold up signs saying that their husbands are awesome” (which just makes no sense at all). But that doesn’t count because it’s a person being stupid. For one thing, that’s descriptive, not prescriptive, and for another, it’s an illustration of the good things being righteous gets you.
And I would never have dreamed of the stupidity until someone did it, but someone actually interpreted metaphors from Proverbs literally and concluded that “her husband is praised at the city gates” actually means “women should go to the city limits and hold up signs saying that their husbands are awesome”
As a semi-militant atheist, I feel compelled to point out that, from my perspective, all interpretations of Proverbs as a practical guide to modern life look about equally silly...
Upvoted for being the only non-Jew I’ve ever met to know that.
Really? Nearly everyone I grew up with was told that, and I assume I wasn’t the only one to remember. I infer that either you don’t know many Christians, the subject hasn’t come up while you were talking to said Christians, or Christian culture in your area is far more ignorant of its religious theory and tradition than it is here.
I’ve heard that some rules are specifically supposed to only apply to Jews,¹ and I think most Christians have heard that at some point in their lives, but I don’t think most of them remember having heard it, and very few know that not wearing clothing of two different kinds of fibre at the same time is one such rule.
I remember Feynman’s WTF reaction in Surely You’re Joking to learning that Jews are not allowed to operate electric switches on Saturdays but they are allowed to pay someone else to do that.
There are different Jewish doctrinal positions on whether shabbos goyim—that is, non-Jews hired to perform tasks on Saturdays that Jews are not permitted to perform—are permissible.
Do I get an upvote, too? I also know about what I should do if I want food I cook to be kosher (though I’m still a bit confused about food containing wheat).
I knew it too. I thought it was common knowledge among those with any non-trivial knowledge of non-folk Christian theology. Which admittedly isn’t a huge subset of the population, but isn’t that small in the West.
Do I get an upvote, too? I also know about what I should do if I want food I cook to be kosher (though I’m still a bit confused about food containing wheat).
I want an upvote too for knowing that if I touch a woman who has her period then I am ‘unclean’. I don’t recall exactly what ‘unclean’ means. I think it’s like ‘cooties’.
Well, I’d lived in Israel for three years, and I did not know about these rules in this much detail, so I feel like I deserve some sort of a downvote :-(
On the morrow, as they went on their journey, and drew nigh unto the city, Peter went up upon the housetop to pray about the sixth hour: And he became very hungry, and would have eaten: but while they made ready, he fell into a trance, And saw heaven opened, and a certain vessel descending upon him, as it had been a great sheet knit at the four corners, and let down to the earth: Wherein were all manner of fourfooted beasts of the earth, and wild beasts, and creeping things, and fowls of the air. And there came a voice to him, Rise, Peter; kill, and eat. But Peter said, Not so, Lord; for I have never eaten any thing that is common or unclean. And the voice spake unto him again the second time, What God hath cleansed, that call not thou common. This was done thrice: and the vessel was received up again into heaven.
If you read the rest of the chapter it’s made clear that the dream is a metaphor for God’s willingness to accept Gentiles as Christians, rather than a specific message about acceptable foods, but abandoning kashrut presumably follows logically from not requiring new Christians to count as Jews first, so.
(Upon rereading this, my first impression is how much creepier slaughtering land animals seems as a metaphor for proselytism than the earlier “fishers of men” stuff; maybe it’s the “go, kill and eat” line or an easier time empathizing with mammals, Idunno. Presumably the way people mentally coded these things in first-century Palestine would differ from today.)
More sex does not have to mean more casual sex. There are lots of people in committed relationships (marriages) that would like to have more-similar sex drives. Nuns wouldn’t want their libido increased, but it’s not only for the benefit of the “playahs” either.
Also, I think the highest-voted comment (“I don’t think that any relationship style is the best (...) However, I do wish that people were more aware of the possibility of polyamory (...)”) is closer to the consensus than something like “everyone should have as many partners as much as possible”. LW does assume that polyamory and casual sex is optional-but-ok, though.
Hmm, that doesn’t sound right. I don’t want to make celibate people uncomfortable, I just want to have more casual sex myself. Also I have a weaker altruistic wish that people who aren’t “getting any” could “get some” without having to tweak their looks (the beauty industry) or their personality (the pickup scene). There could be many ways to make lots of unhappy people happier about sex and romance without tweaking your libido. Tweaking libido sounds a little pointless to me anyway, because PUA dogma (which I mostly agree with) predicts that people will just spend the surplus libido on attractive partners and leave unattractive ones in the dust, like they do today.
625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn’t really know. These numbers were similar between men and women.
But never mind. We have different terminal values here. You—I assume—seek a lot of partners for everyone, right?
Nope! I don’t have any certainty about what is best for society / mankind in the long run, but personally, I’m fine with monogamy, I’m married, have a kid, and don’t think “more casual sex” is necessarily a good thing.
I can, however, agree with Eliezer when he says it might be better if human sex drives were better adjusted—not because I value seeing more people screwing around like monkeys, but because it seems that the way things are now results in a great deal of frustration and unhappiness.
I don’t know about rape, but I expect that more sex drive for women and less for men would result in fewer divorces, because differences in sex drive are a frequent source of friction, as is infidelity (though it’s not clear that evening out sex drives would result in less infidelity). That’s not to say that hacking people’s brains is the only solution, or the best one.
I’m a married, monogamous person who would love to be able to adjust my sex drive to match my spouse’s (and I think we would both choose to adjust up).
The Twilight books do an interesting riff of the themes of eternal life, monogamy, and extremely high sex drives.
If enough feel similarly, and the discrepancy is real, the means will move toward each other through voluntary shifts, without forcing anything on anyone, incidentally.
What “voluntary shifts” do you mean? I agree that small shifts in sex drive are possible based on individual choice, but not large ones. Also, why do the means matter?
Ah, misunderstanding. I did not mean “shifts by volition alone”, but “voluntary as opposed to forced” as pertains to AspiringKnitter’s earlier worry about Yudkowsky forcing “some sort of compromise where we lowered male sex drive a little and increased female sex drive a little.”
If interpreted as a prediction rather than a recommendation, it might happen through individual choice if the ability to modify these things directly becomes sufficiently available (and sufficiently safe, and sufficiently accepted, &c) because of impulses like those you expressed: pairings that desire to be monogamous and who are otherwise compatible might choose to self modify to be compatible on this axis as well, and this will move the averages closer together.
I think people’s intuitions about sex drives are interesting, because they seem to differ. Earlier we had a discussion where it became clear that some conceptualized lust as something like hunger—an active harm unless fulfilled—while I had always generalized from one example and assumed lust simpliciter pleasant and merely better when fulfilled. Of course it would be inconvenient for other things if it were constantly present, and were I a Christian of the right type the ideal level would obviously be lower, so this isn’t me at all saying you’re crazy and incomprehensible in some veiled way—I just think these kinds of implicit conceptual differences are interesting.
“EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all. Sorry, but I don’t WANT to want more sex.” Ok, but would you agree to lowering male sex drive, then? Making it easier for those who want to follow a “no sex” path, and lowering the difference between males and females in terms of sex drive in the process? Eliezer’s goal was to lower the difference between the desires of the two sexes so they could both be happier. He proposed doing it by making them both go towards the average, but aligning to the lower of the two would fit the purpose too.
[EY had] proposed modifications you want to make to people that I don’t want made to me already...
I am actually rather curious to hear more about your opinion on this topic. I personally would jump at the chance to become “better, stronger, faster” (and, of course, smarter), as long as doing so was my own choice. It is very difficult for me to imagine a situation where someone I trust tells me, for example, “this implant is 100% safe, cheap, never breaks down, and will make you think twice as fast, do you want it ?”, and I answer “no thanks”. You obviously disagree, so I’d love to hear your reasoning.
EDIT: Basically, what Cthulhoo said. Sorry Cthulhoo, I didn’t see your comment earlier, somehow.
I was under the impression that your example dealt with a compulsory modification (higher sex drive for all women across the board), which is something I would also oppose; that’s why I specified ”...as long as doing so was my own choice” in my comment. But I am under the impression—and perhaps I’m wrong about this—that you would not choose any sort of a technological enhancement of any of your capabilities. Is that so ? If so, why ?
No. I apologize for being unclear. EY has proposed modifications I don’t want, but that doesn’t mean every modification he supports is one I don’t want. I think I would be more skeptical than most people here, but I wouldn’t refuse all possible enhancements as a matter of principle.
Yay for not acting like EY wants, I guess. No offense or anything, EY, but you’ve proposed modifications you want to make to people that I don’t want made to me already...
I would be very interested in reading your opinion on this subject. There is sometimes a confirmation effect/death spiral inside the LW community, and it would be nice to be exposed to a completely different point of view. I may then modify my beliefs fully, in part or not at all as a consequence, but it’s valuable information for me.
Why did you frame it that way, rather than that AspiringKnitter wasn’t a Christian, or was someone with a long history of trolling, or somesuch? It’s much less likely to get a particular identity right than to establish that a poster is lying about who they are.
Holy crap. I’ve never had a comment downvoted this fast, and I thought this was a pretty funny joke to boot. My mental estimate was that the original comment would end up resting at around +4 or +5. Where did I err?
I left it alone because I have absolutely no idea what you are talking about. Dubstep? Will likes, dislikes and/or does something involving dubstep? (Google tells me it is a kind of dance music.)
(Er, well, math intuitions in a few specific fields, and only one or two rather specific dubstep videos. I’m not, ya know, actually crazy. The important thing is that that video is, as the kids would offensively say, “sicker than Hitler’s kill/death ratio”.) newayz I upvoted your original comment.
That’s remarkably confident. This doesn’t really read like Newsome to me (and how would one find out with sufficient certainty to decide a bet for that much?).
Just how confident is it? It’s a large figure and colloquially people tend to confuse size of bet with degree of confidence—saying a bigger number is more of a dramatic social move. But ultimately to make a bet at even odds all Mitchell needs is to be confident that if someone takes him up on the bet then he has 50% or more chance of being correct. The size of the bet only matters indirectly as an incentive for others to do more research before betting.
Mitchell’s actual confidence is some unspecified figure between 0.5 and 1 and is heavily influenced by how overconfident he expects others to be.
But ultimately to make a bet at even odds all Mitchell needs is to be confident that if someone takes him up on the bet then he has 50% or more chance of being correct. The size of the bet only matters indirectly as an incentive for others to do more research before betting.
This would only be true if money had linear utility value [1]. I, for example, would not take a $1000 bet at even odds even if I had 75% confidence of winning, because with my present financial status I just can’t afford to lose $1000. But I would take such a bet of $100.
The utility of winning $1000 is not the negative of the utility of losing $1000.
[1] or, to be precise, if it were approximately linear in the range of current net assets +/- $1000
In a case with extremely asymmetric information like this one they actually are almost the same thing, since the only payoff you can reasonably expect is the rhetorical effect of offering the bet. Offering bets the other party can refuse and the other party has effectively perfect information about can only lose money (if money is the only thing the other party cares about and they act at least vaguely rationally).
Risk aversion and other considerations like gambler’s ruin usually mean that people insist on substantial edges over just >50%. This can be ameliorated by wealth, but as far as I know, Porter is at best middle-class and not, say, a millionaire.
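For what it’s worth, here is a minimal sketch of the non-linear-utility point above, assuming log utility of money; the $1,100 of touchable savings and the 75% confidence are invented numbers, purely for illustration:

```python
import math

def expected_log_utility_change(wealth, stake, p_win):
    """Expected change in log-utility from an even-money bet.

    Log utility is only one illustrative choice of concave utility;
    any risk-averse utility function shows the same asymmetry between
    winning and losing the same dollar amount.
    """
    win = math.log(wealth + stake) - math.log(wealth)
    lose = math.log(wealth - stake) - math.log(wealth)
    return p_win * win + (1 - p_win) * lose

# Hypothetical bettor with $1,100 of savings they can afford to touch,
# 75% confident of winning an even-odds bet.
for stake in (100, 1000):
    eu = expected_log_utility_change(wealth=1_100, stake=stake, p_win=0.75)
    print(f"${stake} bet: expected utility change = {eu:+.3f}")
```

Under these assumptions the $100 bet comes out positive and the $1,000 bet negative, even though the expected dollar value of both is positive, which matches the reasoning about asymmetric utility and risk aversion above.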
Agree on a trusted third party (gwern, Alicorn, NancyLebowitz … high-karma longtimers who showed up in this thread), and have AK call them on the phone, confirming details, then have the third party confirm that it’s not Will_Newsome.
… though the main problem would be, do people agree to bet before or after AK agrees to such a scheme?
How would gwern, Alicorn or NancyLebowitz confirm that anything I said by phone meant AspiringKnitter isn’t Will Newsome? They could confirm that they talked to a person. How could they confirm that that person had made AspiringKnitter’s posts? How could they determine that that person had not made Will Newsome’s posts?
At the very least, they could dictate an arbitrary passage (or an MD5 hash) to this person who claims to be AK, and ask them to post this passage as a comment on this thread, coming from AK’s account. This would not definitively prove that the person is AK, but it might serve as a strong piece of supporting evidence.
In addition, once the “AK” persona and the “WillNewsome” persona each post a sufficiently large corpus of text, we could run some textual analysis algorithms on it to determine if their writing styles are similar; Markov Chains are surprisingly good at this (considering how simple they are to implement).
The problem of determining a person’s identity on the Internet, and doing so in a reasonably safe way, is an interesting challenge. But in practice, I don’t really think it matters that much, in this case. I care about what the “AK” persona writes, not about who they are pretending not to be.
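Here is a toy sketch of the kind of Markov-chain comparison described above, using character bigrams with add-one smoothing. The corpora below are placeholders (real ones would be each account’s concatenated comment history), and a serious attempt would want far more text, word-level features, and cross-validation:

```python
from collections import defaultdict
import math

def char_bigram_model(text):
    """Count character-bigram frequencies for a simple Markov chain."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(text, model, alpha=1.0, vocab_size=128):
    """Log-probability of `text` under a bigram model, with add-alpha
    smoothing over an assumed ASCII-sized vocabulary."""
    total = 0.0
    for a, b in zip(text, text[1:]):
        row = model.get(a, {})
        row_total = sum(row.values())
        total += math.log((row.get(b, 0) + alpha) / (row_total + alpha * vocab_size))
    return total

# Placeholder corpora; in practice, concatenate each account's comments.
corpus_ak = "placeholder text drawn from AspiringKnitter's comments..."
corpus_wn = "placeholder text drawn from Will_Newsome's comments..."
disputed = "a new comment whose authorship is in question..."

model_ak = char_bigram_model(corpus_ak)
model_wn = char_bigram_model(corpus_wn)

print("fits AK better" if log_likelihood(disputed, model_ak) >
      log_likelihood(disputed, model_wn) else "fits WN better")
```

The account whose model gives the disputed text the higher log-likelihood is the better stylistic fit under this very crude model; that is suggestive evidence at best, not proof of identity.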
In addition, once the “AK” persona and the “WillNewsome” persona each post a sufficiently large corpus of text, we could run some textual analysis algorithms on it to determine if their writing styles are similar; Markov Chains are surprisingly good at this (considering how simple they are to implement).
How about doing this already, with all the stuff they’ve written before the original bet?
I know Will Newsome in real life. If a means of arbitrating this bet is invented, I will identify AspiringKnitter as being him or not by visual or voice for a small cut of the stakes. (If it doesn’t involve using Skype, telephone, or an equivalent, and it’s not dreadfully inconvenient, I’ll do it for free.)
A sidetrack: People seem to be conflating AspiringKnitter’s identity as a Christian and a woman. Female is an important part of not being Will Newsome, but suppose that AspiringKnitter were a male Christian and not Will Newsome. Would that make a difference to any part of this discussion?
More identity issues: My name is Nancy Lebovitz with a v, not a w.
Sorry ’bout the spelling of your name, I wonder if I didn’t make the same mistake before …
Well, the biggest thing AK being a male non-Will Christian would change, is that he would lose an easy way to prove to a third party that he’s not Will Newsome and thus win a thousand bucks (though the important part is not exactly being female, it’s having a recognizably female voice on the phone, which is still pretty highly correlated).
Rationalist lesson that I’ve derived from the frequency with which people get my name wrong: It’s typical for people to get it wrong even if I say it more than once, spell it for them, and show it to them in writing. I’m flattered if any of my friends start getting it right in less than a year.
Correct spelling and pronunciation of my name is a simple, well-defined, objective matter, and I’m in there advocating for it, though I cut people slack if they’re emotionally stressed.
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks. Less Wrong has a lot about cognitive biases, but not so much about perceptual biases.
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks.
This is a feature, not a bug. Natural language has lots of redundancy, and if we read one letter at a time rather than in word-sized chunks we would read much more slowly.
I think you have causality reversed here. It’s the redundancy of our languages that’s the “feature”—or, more precisely, the workaround for the previously existing hardware limitation. If our perceptual systems did less “filling in of blanks,” it seems likely that our languages would be less redundant—at least in certain ways.
I think redundancy was originally there to counteract noise, of which there was likely a lot more in the ancestral environment, and as a result there’s more-than-enough of it in such environments as reading text written in a decent typeface one foot away from your face, and the brain can then afford to use it to read much faster. (It’s not that hard to read at 600 words per minute with nearly complete understanding in good conditions, but if someone was able to speak that fast in a not-particularly-quiet environment, I doubt I’d be able to understand much.)
I think it’s time to close out this somewhat underspecified offer of a bet. So far, AspiringKnitter and Eliezer expressed interest but only if a method of resolving the bet could be determined, Alicorn offered to play a role in resolving the bet in return for a share of the winnings, and dlthomas offered up $15.
I will leave the possibility of joining the bet open for another 24 hours, starting from the moment this comment is posted. I won’t look at the site during that time. Then I’ll return, see who (if anyone) still wants a piece of the action, and will also attempt to resolve any remaining conflicts about who gets to participate and on what terms. You are allowed to say “I want to join the bet, but this is conditional upon resolving such-and-such issue of procedure, arbitration, etc.” Those details can be sorted out later. This is just the last chance to shortlist yourself as a potential bettor.
And the winners are… dlthomas, who gets $15, and ITakeBets, who gets $100, for being bold enough to bet unconditionally. I accept their bets, I formally concede them, aaaand we’re done.
You know I followed your talk about betting but never once considered that I could win money for realz if I took you up on it. The difficulty of proving such things made the subject seem just abstract. Oops.
I didn’t exactly realize it, but I reduced the probability. My goal was never to make a bet, my goal was to sockblock Will. But in the end I found his protestations somewhat convincing; he actually sounded for a moment like someone earnestly defending himself, rather than like a joker. And I wasn’t in the mood to re-run my comparison between the Gospel of Will and the Knitter’s Apocryphon. So I tried to retire the bet in a fair way, since having an ostentatious unsubstantiated accusation of sockpuppetry in the air is almost as corrosive to community trust as it is to be beset by the real thing. (ETA: I posted this before I saw Kevin’s comment, by the way!)
“Next time just don’t be a dick and you won’t lose a hundred bucks,” says the unreflective part of my brain whose connotations I don’t necessarily endorse but who I think does have a legitimate point.
Edit: Putting up $100, regardless of anyone else’s participation, and I’m prepared to demonstrate that I’m not Will_Newsome if that is somehow necessary.
Unfortunately, I don’t have the spare money to take the other side of the bet, but Will showed a tendency to head off into foggy abstractions which I haven’t seen in Aspiring Knitter.
Will_Newsome does not seem, one would say, incompetent. I have never read a post by him in which he seemed to be unknowingly committing some faux pas. He should be perfectly capable of suppressing that particular aspect of his posting style.
And what do I have to do to win your bet, given that I’m not him (and hadn’t even heard of him before)? After all, even if you saw me in person, you could claim I was paid off by this guy to pretend to be AspiringKnitter. Or shall I just raise my right hand?
I don’t see why this guy wouldn’t offer such a bet, knowing he can always claim I’m lying if I try to provide proof. No downside, so it doesn’t matter how unlikely it is, he could accuse any given person of sockpuppeting. The expected return can’t be negative. That said, the odds here being worse than one in a million, I don’t know why he went to all that trouble for an expected return of less than a cent. There being no way I can prove who I am, I don’t know why I went to all the trouble of saying this, either, though, so maybe we’re all just a little irrational.
Let’s first confirm that you’re willing to pay up, if you are who I say you are.
That’s problematic since if I were Newsome, I wouldn’t agree. Hence, if AspiringKnitter is Will_Newsome, then AspiringKnitter won’t even agree to pay up.
Not actually being Will_Newsome, I’m having trouble considering what I would do in the case where I turned out to be him. But if I took your bet, I’d agree to it. I can’t see how such a bet could possibly get me anything, though, since I can’t see how I’d prove that I’m not him even though I’m really not him.
All right, how about this. If I presented evidence already in the public domain which made it extremely obvious that you are Will Newsome, would you pay up?
By the way, when I announced my belief about who you are, I didn’t have personal profit in mind. I was just expressing confidence in my reasoning.
All right, how about this. If I presented evidence already in the public domain which made it extremely obvious that you are Will Newsome, would you pay up?
There is no such evidence. What do you have in mind that would prove that?
You write stream-of-consciousness run-on sentences which exhibit abnormal disclosure of self while still actually making sense (if one can be bothered parsing them). Not only do you share this trait with Will, the themes and the phrasing are the same. You have a deep familiarity with LessWrong concerns and modes of thought, yet you also advocate Christian metaphysics and monogamy. Again, that’s Will.
That’s not yet “extremely obvious”, but it should certainly raise suspicions. I expect that a very strong case could be made by detailed textual comparison.
I think if Will knew how to write this non-abstractly, he would have a valuable skill he does not presently possess, and he would use that skill more often.
By the time reflective and wannabe-moral people are done tying themselves up in knots, what they usually communicate is nothing; or, if they do communicate, you can hardly tell them apart from the people who truly can’t.
What I’m saying is that most people who write a Less Wrong comment aren’t totally stressing out about all the tradeoffs that inevitably have to be made in order to say anything at all. There’s a famous quote whose gist is ‘I apologize that this letter is so long, but I didn’t have very much time to write it’. The audience has some large and unknown set of constraints on what they’re willing to glance at, read, take seriously, and so on, and the writer has to put a lot of work into meeting those constraints as effectively as possible. Some tradeoffs are easy to make: yes, a long paragraph is a self-contained structure, but that’s less important than readability. Others are a little harder: do I give a drawn-out concrete example of my point, or would that egregiously inflate the length of my comment?
There are also the author’s internal constraints re what they feel they need to say, what they’re willing to say, what they’re willing to say without thinking carefully about whether or not it’s a good idea to say, how much effort they can put into rewriting sentences or linking to relevant papers while their heart’s pumping as if the house is burning down, vague fears of vague consequences, and so on and so forth for as long as the author’s neuroticism or sense of morality allows.
People who are abnormally reflective soon run into meta-level constraints: what does it say about me that I stress out this much at the prospect of being discredited? By meeting these constraints am I supporting the proliferation of a norm that isn’t as good as it would be if I met some other, more psychologically feasible set of constraints? Obviously the pragmatic thing to do is to “just go with it”, but “just going with it” seems to have led to horrifying consequences in the past; why do I expect it to go differently this time?
In the end the author is bound to become self-defeating, dynamically inconsistent. They’ll like as not end up loathing their audience for inadvertently but non-apologetically putting them in such a stressful situation, then loathing themselves for loathing their audience when obviously it’s not the audience’s fault. The end result is a stressful situation where the audience wants to tell the author to do something very obvious, like not stress out about meeting all the constraints they think are important. Unfortunately if you’ve already tied yourself up in knots you don’t generally have a hand available with which to untie them.
ETA: On the positive side they’ll also build a mega-meta-FAI just to escape all these ridiculous double binds. “Ha ha ha, take that, audience! I gave you everything you wanted! Can’t complain now!”
And yet, your g-grandparent comment, about which EY was asking, was brief… which suggests that the process you describe here isn’t always dominant.
Although when asked a question about it, instead of either choosing or refusing to answer the question, you chose to back all the way up and articulate the constraints that underlie the comment.
Hm? I thought I’d answered the question. I.e. I rewrote my original comment roughly the way I’d expect AK to write it, except with my personal concerns about justification and such, which is what Eliezer had asked me to do, ’cuz he wanted more information about whether or not I was AK, so that he could make money off Mitchell Porter. I’m reasonably confident I thwarted his evil plans in that he still doesn’t know to what extent I actually cooperated with him. Eliezer probably knows I’d rather my friends make money off of Mitchell Porter, not Eliezer.
You know, in some ways, that does sound like me, and in some ways it really still doesn’t. Let me first of all congratulate you on being able to alter your style so much. I envy that skill.
What I’m saying is that most people who write a Less Wrong comment aren’t totally stressing out about all the tradeoffs that inevitably have to be made in order to say anything at all.
Your use of “totally” is not the same as my use of “totally”; I think it sounds stupid (personal preference), so if I said it, I would be likely to backspace and write something else. Other than that, I might say something similar.
There’s a famous quote whose gist is ‘I apologize that this letter is so long, but I didn’t have very much time to write it’.
I would have said “that goes something like” instead of “whose gist is”, but that’s the sort of concept I might well have communicated in roughly the manner I would have communicated it.
The audience has some large and unknown set of constraints on what they’re willing to glance at, read, take seriously, and so on, and the writer has to put a lot of work into meeting those constraints as effectively as possible. Some tradeoffs are easy to make: yes, a long paragraph is a self-contained structure, but that’s less important than readability. Others are a little harder: do I give a drawn-out concrete example of my point, or would that egregiously inflate the length of my comment?
An interesting point, and MUCH easier to understand than your original comment in your own style. This conveys the information more clearly.
There are also the author’s internal constraints re what they feel they need to say, what they’re willing to say, what they’re willing to say without thinking carefully about whether or not it’s a good idea to say, how much effort they can put into rewriting sentences or linking to relevant papers while their heart’s pumping as if the house is burning down, vague fears of vague consequences, and so on and so forth for as long as the author’s neuroticism or sense of morality allows.
This has become a run-on sentence. It started like something I would say, but by the end, the sentence is too run-on to be my style. I also don’t use the word “neuroticism”. It’s funny, but I just don’t. I also try to avoid the word “nostrils” for no good reason. In fact, I’m disturbed by having said it as an example of another word I don’t use.
However, this is a LOT closer to my style than your normal writing is. I’m impressed. You’re also much more coherent and interesting this way.
People who are abnormally reflective soon run into meta-level constraints:
I would probably say “exceptionally” or something else other than “abnormally”. I don’t avoid it like “nostrils” or just fail to think of it like “neuroticism”, but I don’t really use that word much. Sometimes I do, but not very often.
what does it say about me that I stress out this much at the prospect of being discredited?
Huh, that’s an interesting thought.
By meeting these constraints am I supporting the proliferation of a norm that isn’t as good as it would be if I met some other, more psychologically feasible set of constraints?
Certainly something I’ve considered. Sometimes in writing or speech, but also in other areas of my life.
Obviously the pragmatic thing to do is to “just go with it”, but “just going with it” seems to have led to horrifying consequences in the past; why do I expect it to go differently this time?
I might have said this, except that I wouldn’t have said the first part because I don’t consider that obvious (or even necessarily true), and I would probably have said “horrific” rather than “horrifying”. I might even have said “bad” rather than either.
In the end the author is bound to become self-defeating,
I would probably have said that “many authors become self-defeating” instead of phrasing it this way.
dynamically inconsistent
Two words I’ve never strung together in my life. This is pure Will. You’re good, but not quite perfect at impersonating me.
They’ll like as not end up loathing their audience for inadvertently but non-apologetically putting them in such a stressful situation, then loathing themselves for loathing their audience when obviously it’s not the audience’s fault.
Huh, interesting. Not quite what I might have said.
The end result is a stressful situation where the audience wants to tell the author to do something very obvious, like not stress out about meeting all the constraints they think are important.
...Why don’t they? Seriously, I dunno if people are usually aware of how uncomfortable they make others.
Unfortunately if you’ve already tied yourself up in knots you don’t generally have a hand available with which to untie them.
I’m afraid I don’t understand.
ETA: On the positive side they’ll also build a mega-meta-FAI just to escape all these ridiculous double binds. “Ha ha ha, take that, audience! I gave you everything you wanted! Can’t complain now!”
And I wouldn’t have said this because I don’t understand it.
Thank you, that was interesting. I should note that I wasn’t honestly trying to sound like you; there was a thousand bucks on the table so I went with some misdirection to make things more interesting. Hence “dynamically inconsistent” and “totally” and so on. I don’t think it had much effect on the bet though.
Yes. Haven’t tried SSRIs yet. Really I just need a regular meditation practice, but there’s a chicken and egg problem of course. Or a prefrontal cortex and prefrontal cortex exercise problem. The solution is obviously “USE MOAR WILLPOWER” but I always forget that or something. Lately I’ve been thinking about simply not sinning, it’s way easier for me to not do things than do things. This tends to have lasting effects and unintended consequences of the sort that have gotten me this far, so I should keep doing it, right? More problems more meta.
IME, more willpower works really poorly as a solution to pretty much anything, for much the same reason that flying works really poorly as a way of getting to my roof. I mean, I suspect that if I could fly, getting to my roof would be very easy, but I can’t fly.
I also find that regular physical exercise and adequate sleep do more to manage my anxiety in the long term (that is, on a scale of months) than anything else I’ve tried.
Have you tried yoga or tai chi as meditation practices? They may be physically complex/challenging enough to distract you (some of the time) from verbally-driven distraction.
I suspect that “not sinning” isn’t simple. How would you define sinning?
Verbally-driven distraction isn't much of an issue; it's mostly just getting to the zafu. Once there, even 5 minutes of meditation is enough to calm me down for 30 minutes, which is a pretty big deal. I'm out of practice; I'm confident I can get back into the groove, but first I have to actually make it to the zafu more than once every week or two. I think I want to stay with something that I already identify with really powerful positive experiences, i.e. jhana meditation. I may try contemplative prayer at some point for empiricism's sake.
Re sinning… now that I think about it I’m not sure that I could do much less than I already do. I read a lot and think a lot, and reflectively endorse doing so, mostly. I’m currently writing a Less Wrong comment which is probably a sin, ‘cuz there’s lots of heathens ’round these parts among other reasons. Huh, I guess I’d never thought about demons influencing norms of discourse on a community website before, even though that’s one of the more obvious things to do. Anyway, yah, the positive sins are sorta simplistically killed off in their most obvious forms, except pride I suppose, while the negative ones are endless.
I do meditate at home! "Zafu" means "cushion". Yeah, I have trouble remembering to walk 10 feet to sit down in a comfortable position on a comfortable cushion instead of being stressed about stuff all day. Brains...
Not sure what the question mark is for. Heathens are bad, it’s probably bad to hang out with them, unless you’re a wannabe saint and are trying to convert them, which I am, but only half-heartedly. Sin is all about contamination, you know? Hence baptism and stuff. Brains...
trying to convert them, which I am, but only half-heartedly.
You are not doing this in any way, shape, or form, unless I missed some post-length or sequence-length argument of yours. (And I don’t mean a “hint” as to what you might believe.) If you have something to say on the topic, you clearly can’t or won’t say it in a comment.
I have to tentatively classify your “trying” as broken signaling (though I notice some confusion on my part). If you were telling the truth about your usual mental state, and not deliberately misleading the reader in some odd way, you’ve likely been trying to signal that you need help.
Sorry, wait, maybe there’s some confusion? Did you interpret me saying “convert” as meaning “convert them to Christianity”? ’Cuz what I meant was convert people to the side of reason more generally, e.g. by occasionally posting totally-non-trolling comments about decision theory and stuff. I’m not a Christian. Or am I misinterpreting you?
I’m not at all trying to signal that I need help, if I seem to be signaling that then it’s an accidental byproduct of some other agenda which is SIGNIFICANTLY MORE MANLYYYY than crying for help.
I’m not at all trying to signal that I need help, if I seem to be signaling that then it’s an accidental byproduct of some other agenda which is SIGNIFICANTLY MORE MANLYYYY than crying for help.
Love the attitude. And for what it’s worth I didn’t infer any signalling of need for help.
Quick response: I saw that you don’t classify your views as Christianity. I do think you classify them as some form of theism, but I took the word “convert” to mean ‘persuade people of whatever the frak you want to say.’
Sorry for the misunderstanding about where you meditate—I’m all too familiar with distraction and habit interfering with valuable self-maintenance.
As for heathens, you’re from a background which is very different from mine. My upbringing was Jewish, but not religiously intense. My family lived in a majority Christian neighborhood.
I suppose it would have been possible to avoid non-Jews, but the social cost would have been very high, and in any case, it was just never considered as an option. To the best of my knowledge, I wasn’t around anyone who saw religious self-segregation as a value. At all. The subject never came up.
I hope I’m not straying into other-optimizing, but I feel compelled to point out that there’s more than one way of being Christian, and not all of them include avoiding socializing with non-Christians.
Ah, I’m not a Christian, and it’s not non-Christians that bother me so much as people who think they know something about how the world works despite, um, not actually knowing much of anything. Inadvertent trolls. My hometown friends are agnostic with one or two exceptions (a close friend of mine is a Catholic, she makes me so proud), my SingInst-related friends are mostly monotheists these days whether they’d admit to it or not I guess but definitely not Christians. I don’t think of for example you as a heathen; there are a lot of intelligent and thoughtful people on this site. I vaguely suspect that they’d fit in better in an intellectual Catholic monastic order, e.g. the Dominicans, but alas it’s hard to say. I’m really lucky to know a handful of thoughtful SingInst-related folk, otherwise I’d probably actually join the Dominicans just to have a somewhat sane peer group. Maybe. My expectations are probably way too high. I might try to convince the Roman Catholic Church to take FAI seriously soon; I actually expect that this will work. They’re so freakin’ reasonable, it’s amazing. Anyway I’m not sure but my point might be that I’m just trying to stay away from people with bad epistemic habits for fear of them contaminating me, like a fundamentalist Christian trying to keep his high epistemic standards amidst a bunch of lions and/or atheists. Better to just stay away from them for the most part. Except hanging out with lions is pretty awesome and saint-worthy whereas hanging out with atheists is just kinda annoying.
Because I’m sinful? And not all of them are heathens, I’m just prone to exaggeration. I think this new AspiringKnitter person is cool, for example; likelihood-ratio-she apparently can supernaturally tell good from bad, which might make my FAI project like a billion times easier, God willing. NancyLebovitz is cool. cousin it is cool. cousin it I can interact with on Facebook but not all of the cool LW people. People talk about me here, I feel compelled to say something for some reason, maybe ’cuz I feel guilty that they’re talking about me and might not realize that I realize that.
Please don’t consider this patronizing but… the writing style of this comment is really cute.
I think you broke whatever part of my brain evaluates people’s signalling. It just gave up and decided your writing is really cute. I really have no idea what impression to form of you; the experience was so unusual that I felt I had to comment.
Thanks to your priming now I can’t see “AspiringKnitter” without mentally replacing it with “AspiringKittens” and a mental image of a Less Wrong meetup of kittens who sincerely want to have better epistemic practices. Way to make the world a better place.
I think I only ever made one argument for Christianity? It was hilarious, everyone was all like WTF!??! and I was like TROLOLOLOL. I wonder if Catholics know that trolling is good, I hear that Zen folk do. Anyway it was naturally a soteriological argument which I intended to be identical to the standard “moral transformation” argument which for naturalists (metaphysiskeptics?) is the easiest of the theories to swallow. If I was expounding my actual thoughts on the matter they would be significantly more sophisticated and subtle and would involve this really interesting part where I talk about “Whose Line Is It Anyway?” and how Jesus is basically like Colin Mochrie specifically during the ‘make stupid noises then we make fun of you for sucking but that redeems the stupid noises’ part. I’m talking about something brilliant that doesn’t exist I’m like Borges LOL!
Local coherence is the hobgoblin of miniscule minds; global coherence is next to godliness.
(ETA: In case anyone can’t tell, I just discovered Dinosaur Comics and, naturally, read through half the archives in one sitting.)
Downvoted, by the way. I want to signal my distaste for being confused for you. Are you using some form of mind-altering substance or are you normally like this? I think you need to take a few steps back. And breathe. And then study how to communicate more clearly, because I think either you’re having trouble communicating or I’m having trouble understanding you.
It would probably require the community stopping feeding the ugly little lump.
Also,
“Mood?” Halleck’s voice betrayed his outrage even through the shield’s filtering. “What has mood to do with it? You downvote when the necessity arises—no matter the mood! Mood’s a thing for cattle or making love or playing the baliset. It’s not for downvoting.”
It would probably require the community stopping feeding the ugly little lump.
We don’t approve of that kind of language used against anyone considered to be of our in-group, no matter how weird they might act. Please delete this.
That is, I would expect a comment of which the Hivemind strongly disapproves to accumulate a negative score over a month-plus.
That’s what I’d expect, as well, though I wish it weren’t so. I usually try to make the effort to upvote or downvote comments based on how informative, well-written, and well-reasoned they are, not whether I agree with them or not (with the exception of poll-style comments). Of course, just because I try to do this, doesn’t mean that I succeed...
For what it’s worth, I agree. Will’s kind of awesome, in a weird way. (Though my first reaction was “Wait, just our in-group? That’s groupist!”) But I’m not nearly as confident in my model of what others approve or disapprove of.
Are you using some form of mind-altering substance[...]?
On second thought maybe I am in a sense; my cortisol (?) levels have been ridiculously high ever since I learned that people have been talking about me here on LW. For about a day before that I’d been rather abnormally happy—my default state matches the negative symptoms of schizophrenia as you’d expect of a prodrome, and “happiness” as such is not an emotion I experience very much at all—which I think combined with the unexpected stressor caused my body to go into freak-out-completely mode, where it remains and probably will remain until I spend time with a close friend. Even so I don’t think this has had as much an effect on my writing style as reading a thousand Dinosaur Comics has.
my default state matches the negative symptoms of schizophrenia...”happiness” as such is not an emotion I experience very much at all
Have you sought professional help in the past? If not, do nothing else until you take some concrete step in that direction. This is an order from your decision theory.
Yes, including from the nice but not particularly insightful folk at UCSF, but negative symptoms generally don’t go away, ever. My brain is pretty messed up. Jhana meditation is wonderful and helps when I can get myself to do it. Technically if I did 60mg of Adderall and stayed up for about 30 to 45 hours then crashed, then repeated the process forever, I think that would overall increase my quality of life, but I’m not particularly confident of that, especially as the outside view says that’s a horrible idea. In my experience it ups the variance which is generally a good thing. Theoretically I could take a bunch of nitrous oxide near the end of the day so as to stay up for only about 24 hours as opposed to 35 before crashing; I’m not sure if I should be thinking “well hell, my dopaminergic system is totally screwed anyway” or “I should preserve what precious little automatic dopaminergic regulation I have left”. In general nobody knows nothin’ ‘bout nothin’, so my stopgap solution is moar meditation and moar meta.
Have you tried doing a detailed analysis of what would make it easier for you to meditate, and then experimenting to find whether you’ve found anything which would actually make it easier? Is keeping your cushion closer to where you usually are a possibility?
Not particularly detailed. It’s hard to do better than convincing my girlfriend to bug me about it a few times a day, which she’s getting better at. I think it’s a gradual process and I’m making progress. I’m sure Eliezer’s problems are quite similar, I suppose I could ask him what self-manipulation tactics he uses besides watching Courage Wolf YouTube videos.
Technically if I did 60mg of Adderall and stayed up for about 30 to 45 hours then crashed, then repeated the process forever, I think that would overall increase my quality of life
I suspect it would, at least in some ways. I’m mentally maybe not too dissimilar, and have done a few months of polyphasic sleeping, supported by caffeine (which I’m way too sensitive to). My mental abilities were pretty much crap, and damn was I agitated, but I was overall happier, baseline at least.
I do recommend 4+ days of sleep deprivation and desperately trying to figure out how an elevator in HL2 works as a short-term treatment for can’t-think-or-talk-but-bored, though.
Are you using some form of mind-altering substance or are you normally like this?
No and no. I’m only like this on Less Wrong. Trust me, I know it doesn’t seem like it, but I’ve thought about this very carefully and thoroughly for a long time. It’s not that I’m having trouble communicating; it’s that I’m not trying to. Not anything on the object level at least. The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria. In response you may feel very, very compelled to ask: “If you’re not trying to communicate as such then why are you expending your and my effort writing out diatribes?” Trust me, I know it doesn’t seem like it, but I’ve thought about this very carefully and thoroughly for a long time. “I’m going to downvote you anyway; I want to discourage flagrant violations of reasonable social norms of communication.” As expected! I’m clearly not optimizing for karma. And my past selves managed to stock up like 5,000 karma anyway so I have a lot to burn. I understand exactly why you’re downvoting, I have complex intuitions about the moral evidence implicit in your vote, and in recompense I’ll try harder to “be perfect”.
It’s not that I’m having trouble communicating; it’s that I’m not trying to.
So it is more just trolling.
The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria.
Which, from the various comments Will has made along these lines we can roughly translate to “via incoherent abstract rationalizations Will_Newsome has not only convinced himself that embracing the crazy while on lesswrong is a good idea but that doing so is in fact a moral virtue”. Unfortunately this kind of conviction is highly resistant to persuasion. He is Doing the Right Thing. And he is doing the right thing from within a complex framework wherein not doing the right thing has potentially drastic (quasi-religious-level) consequences. All we can really do is keep the insane subset of his posts voted below the visibility threshold and apply the “don’t feed the troll” policy while he is in that mode.
One of my Facebook activities is "finding bits of Chaitin's omega"! I am an interesting and complex person! I am nice to my girlfriend and she makes good food like fresh pizza! Sometimes I work on FAI stuff, I'm not the best at it but I'm surprisingly okay! I found a way to hack the arithmetical hierarchy using ambient control, it's really neat, when I tell people about it they go like "WTF that is a really neat idea Will!"! If you're nice to me maybe I'll tell you someday? You never know, life is full of surprises allegedly!
This particular post of yours was, last night, at 4 upvotes. Do you have any hypothesis as to why that was the case? I am rather curious as to how that happened.
This particular post of yours was, last night, at 4 upvotes.
An instance of the more general phenomenon. If I recall, the grandparent in particular was at about −3, then overnight (wedrifid time) went up to +5, and now seems to be back at −4. Will's other comments from the time period all experienced a fluctuation of about the same degree. I infer that the fickle bulk upvotes and downvotes are from the same accounts, and with somewhat less confidence that they are from the same user.
Do you have any hypothesis as to why that was the case?
It’s possible that the aesthetic only appeals to voters in certain parts of the globe.
Are you saying there is a whole country which supports internet trolls? Forget WMDs, the next war needs to be on the real threat to (the convenience of) civilization!
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD DAMMIT I can’t take it anymore, why does English treat “or” as “xor”? We have “either x or y” for that. Now I have to say “and/or” which looks and is stupid. I refuse.
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD
Which God? If it is Yahweh then that guy’s kind of a dick and I don’t value his opinion much at all. But he isn’t enough of a dick that I can reverse stupidity to arrive at anything useful either.
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD
Neither, really. There are trickster figures all over the place in mythology; it’d take a fairly impressive argument to get me to believe that YHWH is one of them, but assuming such an argument I don’t think it’d imply many updates that “Coyote likes trolling people” (a nearly tautological statement) wouldn’t.
Hm? Even if YHWH existed and was really powerful, you still wouldn’t update much if you found out He likes to troll people? Or does your comment only apply if YHWH is a fiction?
What’s the hypothesis, that the Bible was subtly optimized to bring about Rick Astley and Rickrolling 1,500 or so years later? That… that does seem like His style… I mean obviously the Bible would be optimized to do all kinds of things, but that might be one of the subgoals, you never know.
Aw, wedrifid, that’s mean. :( I was asleep during that time. There’s probably some evidence of that on my Facebook page, i.e. no activity until about like 5 hours ago when I woke up. Also you should know that I’m not so incredibly lame/retarded as to artificially inflate a bunch of comments’ votes for basically no reason other than to provoke accusations that I had done so.
Is it? I didn't think it was something that you would be offended by. Since the mass voting was up but then back down to where it started, it isn't a misdemeanor so much as peculiar and confusing. The only possibility that sprang to mind was that it could be an extension of your empirical experimentation. You (said that you) actually made a bunch of the comments specifically so that they would get downvotes so that you could see how that influenced the voting behavior of others. Tinkering with said votes to satisfy a further indecipherable curiosity doesn't seem like all that much of a stretch.
No, not really at all, I was just playing around. I don’t really get offended; I get the impression that you don’t either. And yeah upon reflection your hypothesis was reasonable, I probably only thought it was absurd ‘cuz I have insider knowledge. (ETA: Reasoning about counterfactual states of knowledge is really hard; not only practically speaking ’cuz brains aren’t meant to do that, but theoretically too, which is why people get really confused about anthropics. The latter point deserves a post I mean Facebook status update at some point.)
ETA: Reasoning about counterfactual states of knowledge is really hard; not only practically speaking ’cuz brains aren’t meant to do that, but theoretically too, which is why people get really confused about anthropics. The latter point deserves a post I mean Facebook status update at some point.
That’s true. It’s tricky enough that Eliezer seems to get confused about it (or at least I thought he was confusing himself back when he wrote a post or two on the subject.)
I guess that sounds fun? Or why do you think it sounds fun? I think it'd only be worth it if the thread was really public, like when that Givewell dude made that one post about naive EU maximization and charity.
Why does that sound fun? I don’t know. I do know that when I am less-than-lucid, I am liable to lead individuals on conversational wild-goose chases. Within these conversations, I will use a variety of tactics to draw the other partner deeper into the conversation. No tactic in particular is fun, except in-so-far as it confuses the other person. Of course, when I am of sound mind, I do not find this game to be terribly fun.
I assume that you play similar games on Lesswrong. Purposely upvoting one’s own comments in an obvious way, followed by then denying that one did it, seems like a good way to confuse and frustrate other people. I know that if the thought occurred to me when I was less-than-lucid, and if I were the sort of person to play such games on Lesswrong, I probably would try the tactic out.
This seems more likely than you having a cadre of silent, but upvoting, admirers.
Both seem unlikely. I’m still confused. I think God likes trolling, maybe He did it? Not sure what mechanism He’d use though so it’s not a particularly good explanation.
Wedrifid said that too. I don’t have a model that predicts that. I think that most of the time my comments get upvoted to somewhere between 1 and 5 and then drop off as people who aren’t Less Wrong regulars read through; that the reverse would happen for a few hours at least is odd. It’s possible that the not-particularly-intelligent people who normally downvote my posts when they’re insightful also tend to upvote my posts when they’re “worthless”. ETA: thomblake’s hypothesis about regional differences in aesthetics seems more plausible than mine.
Erm. I can’t say that this raises my confidence much. I am reminded of the John McCarthy quote, “Your denial of the importance of objectivity amounts to announcing your intention to lie to us. No-one should believe anything you say.”
I feel responsible for the current wave of gibberish-spam from Will, and I regret that. If it were up to me, I would present him with an ultimatum—either he should promise not to sockpuppet here ever again, and he’d better make it convincing, or else every one of his accounts that can be identified will be banned. The corrosive effect of not knowing whether a new identity is a real person or just Will again, whether he’s “conducting experiments” by secretly mass-upvoting his own comments, etc., to my mind far outweighs the value of his comments.
I freely admit that I have one sockpuppet, who has made less than five comments and has over 20 karma. I do not think that having one sockpuppet for anonymity’s sake is against community norms.
ETA: I mean one sock puppet besides Mitchell Porter obviously.
I freely admit that I have one sockpuppet, who has made less than five comments and has over 20 karma.
I have a private message, dated 7 October, from an account with “less than five comments and [...] over 20 karma”, which begins, “I’m Will_Newsome, this is one of my alts.” (Emphasis mine.)
Will, I’m sorry it’s turning out like this. I am not perfect myself; anyone who cares may look up users “Bananarama” and “OperationPaperclip” and see my own lame anonymous humor. More to the point, I do actually believe that you want to “keep the stars from burning down”, and you’re not just a troll out to waste everyone’s time. The way I see it, because you have neither a job to tie you down, nor genuine intellectual peers and collaborators, it’s easy to end up seeking the way forward via elaborate crazy schemes, hatched and pursued in solitude; and I suspect that I got in the way of one such scheme, by asserting that AK is you.
I have those! E.g. I spend a lot of time with Steve, who is the most rational person in the entire universe, and I hang out with folk like Nick Tarleton and Michael Vassar and stuff. All those 3 people are way smarter than me, though arguably I get around some of that by way of playing to my strengths. The point is that I can play intellectualism with them, especially Steve who’s really good at understanding me. ETA: I also talk to the Black Belt Bayesian himself sorta often.
I suspect that I got in the way of one such scheme, by asserting that AK is you.
Ahhhh, okay, I see why you’d feel bad now I guess? Admittedly I wouldn’t have started commenting recently unless there’d been the confusion of me and AK, but AK isn’t me and my returning was just ’cuz I freaked out that people on LW were talking about me and I didn’t know why. Really I don’t think you’re to blame at all. And thinking AK is me does seem like a pretty reasonable hypothesis. It’s a false hypothesis but not obviously so.
I was only counting alts I’d used in the last few months. I remember having made two alts, but the first one, User:Arbitrarity, I gave up on (I think I’d forgotten about it) which is when I switched to the alt that I used to message you with (apparently I’d remembered it by then, though I wasn’t using it; I just like the word “arbitrarity”).
ETA: Also note that the one substantive comment I made from Arbitrarity has obvious reasons for being kept anonymous.
Anyway I can’t see any plausible reason why you should feel responsible for my current wave of gibberish-spam. [ETA: I mean except for the gibberish-spam I’m writing as a response to your comment; you should maybe feel responsible for that.] My autobiographical memory is admittedly pretty horrible but still.
I don’t follow; your confidence in the value of trolling or your confidence in the general worthwhileness of fairly reading or charitably interpreting my contributions to Less Wrong? ’Cuz I’d given up on the latter a long time ago, but I don’t want your poor impression of me to falsely color your views on the value of trolling.
Eliezer please ban Mitchell Porter, he’s one of my sock puppets and I feel really guilty about it. Yeah I know you’ve known the real Mitchell Porter for like a decade now but I hacked into his account or maybe I bought it from him or something and now it’s just another of my sock puppets, so you know, ban the hell out of him please? It’s only fair. Thx bro!
Thanks! Um do you know any easy way to provide a lot of evidence that I have only one sockpuppet? I’m mildly afraid that Eliezer is going to take Mitchell Porter’s heinous allegations seriously as part of a secret conspiracy is that redundant? fuck. anyway secret conspiracy to discredit me. I am the only one who should be allowed to discredit me!
Um do you know any easy way to provide a lot of evidence that I have only one sockpuppet?
Ask a moderator (or whatever it takes to have access to IP logs) to check to see if there are multiple suspicious accounts from your most common IP. That’s even better than asking you to raise your right hand if you are not lying. It at least shows that you have enough respect for the community to at least try to hide it when you are defecting! :P
I’m confused. What happened overnight that made people suddenly start appreciating Will’s advocacy of his own trolling here and the surrounding context? −5 to +7 is a big change and there have been similar changes to related comments. Either someone is sockpuppeting or people are actually starting to appreciate this crap. (I’m really hoping the former!)
Do you specifically appreciate the advocacy of trolling comments that are the context or are you just saying that you appreciate Will’s actual contributions such as they are?
I often appreciate his contributions as well. He is generally awful at constraining his abstract creativity so as to formulate constructive, concrete ideas but I can constrain abstract creativity just fine so his posts often provoke insights—the rest just bumps up against my nonsense filter. Reading him at his best is a bit like taking a small dose of a hallucinogenic to provide my brain with a dose of raw material to hack away at with logic.
Folks like you might wanna friend me on Facebook, I’m generally a lot more insightful and comprehensible there. I use Facebook like Steven Kaas uses Twitter. https://www.facebook.com/autothexis
Re your other comment re mechanisms for psi, I can’t muster up the energy to reply unfortunately. I’d have to be too careful about keeping levels of organization distinct, which is really easy to do in my head but really hard to write about. I might respond later.
Either someone is sockpuppeting or people are actually starting to appreciate this crap.
Did I say 5 years? Whoops...
Regarding sockpuppeting, that would suck. Can't someone take a look at the database and figure out whether many votes came from the same IP? Even better, when there are cases of weird voting behavior, someone could check whether the votes came from dummy accounts: look at their karma scores and recent submissions and see whether they sit near zero karma and whether their recent submissions are similar in style and diction.
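Roughly the kind of check I have in mind, sketched below. I'm making up the record fields since I know nothing about the actual LW/Trike schema, and I've left out the style-and-diction comparison entirely; treat it as an illustration, not a proposal for the real codebase.

```python
from collections import defaultdict

# Hypothetical record shapes (placeholder field names, not the real schema):
# vote  = {"voter": "name", "comment": "id", "ip": "1.2.3.4", "direction": +1}
# karma = {"name": karma_score}

def votes_by_ip(votes):
    """Group voter names by (comment, ip) so same-IP vote bursts stand out."""
    grouped = defaultdict(list)
    for v in votes:
        grouped[(v["comment"], v["ip"])].append(v["voter"])
    return grouped

def suspicious_clusters(votes, karma, threshold=10):
    """Flag (comment, ip) pairs where more than one low-karma account voted."""
    flagged = []
    for (comment, ip), voters in votes_by_ip(votes).items():
        low = [name for name in voters if karma.get(name, 0) <= threshold]
        if len(low) > 1:
            flagged.append({"comment": comment, "ip": ip, "accounts": low})
    return flagged
```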
I think you severely underestimate the value of trolling.
And I suspect you incorrectly classify some of your contributions, placing them into a different subcategory within “willful defiance of the community preference” than where they belong. Unfortunately this means that the subset of your thoughts that are creative, deep and informed rather than just incoherent and flawed tend to be wasted.
My creative, deep, and informed thoughts are a superset of my thoughts in general not a subset wedrifid. Also I do not have any incoherent or flawed thoughts as should be obvious from the previous sentence but I realize that category theory is a difficult subject for many people.
ETA: Okay good, it took awhile for this to get downvoted and I was starting to get even more worried about the local sanity waterline.
Okay good, it took awhile for this to get downvoted and I was starting to get even more worried about the local sanity waterline.
I suspect that the reason for this is that the comment tree of which your post was a branch is hidden by default, as it originates from a comment with less than −3 karma.
Um, on another note, could you just be less mean? ‘Mean’ seems to be the most accurate descriptor for posting trash that people have to downvote to stay hidden, after all.
I suspect that the reason for this is that the comment tree of which your post was a branch is hidden by default, as it originates from a comment with less than −3 karma.
No, I ran an actual test by posting messages in all caps to use as a control. Empiricism is so cool! (ETA: I also wrote a perfectly reasonable but mildly complex comment as a second control, which garnered the same number of downvotes as my insane set theory comment in about the same length of time.)
Re meanness, I will consider your request Dorikka. I will consider it.
The problem I have is that you claim to be “not optimising for karma”, but you appear to be “optimising for negative karma”. For example, the parent comment. There are two parts to it; acknowledgement of my comment, and a style that garners downvotes. The second part—why? It doesn’t fit into any other goal structure I can think of; it really only makes sense if you’re explicitly trying to get downvoted.
One of my optimization criteria is discreditable-ness, which I guess is sort of like optimizing for downvotes insofar as my audience really cares about credibility. When it comes to motivational dynamics there tends to be a lot of crossing between meta-levels and it's hard to tell what models are actually very good predictors. You can approximately model the comment you replied to by saying I was optimizing for downvotes, but that model wouldn't remain accurate if, e.g., Less Wrong suddenly started accepting 4chan-speak. That's obviously unlikely but the point is that a surface-level model like that doesn't much help you understand why I say what I say. Not that you should want to understand that.
And my past selves managed to stock up like 5,000 karma anyway so I have a lot to burn.
I’m confused. Have you sockpuppeted before?
The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria.
I think I might understand what you’re saying here, in which case I see… sort of. I think I see what you’re doing but not why you’re doing it. Oh, well. Thank you for the explanation, that makes more sense.
Yes, barely, but I meant “past selves” in the usual Buddhist sense, i.e. I wrote some well-received posts under this account in the past. You might like the irrationality game, I made it for people like you.
On another note I’m sorry that my taste for discreditability has contaminated you by association; a year or so ago I foresaw that such an event would happen and deemed it a necessary tradeoff but naturally I still feel bad about it. I’m also not entirely sure I made the correct tradeoff; morality is hard. I wish I had synderesis.
“Deep familiarity with LessWrong concerns and modes of thought” can be explained by her having lurked a lot, and the rest of those features are not rare IME (even though they are under-represented on LW).
I put some text from recent comments by both AspiringKnitter and Will_Newsome into I write like; it suggested that AspiringKnitter writes “like” Arthur Clarke (2001: A Space Odyssey and other books) while Will_Newsome writes “like” Vladimir Nabokov (Lolita and other books). I’ve never read either, but it does look like a convenient textual comparison doesn’t trivially point to them being the same.
Also, if AspiringKnitter is a sockpuppet, it’s at least an interesting one.
When I put your first paragraph in that confabulator, it says “Vladimir Nabokov”. If I remove the words “Vladimir Nabokov (Lolita and other books)” from the paragraph, it says “H.P. Lovecraft”. It doesn’t seem to cut possible texts into clusters well enough.
I just got H.P. Lovecraft, Dan Brown, and Edgar Allan Poe for three different comments. I am somewhat curious as to whether this page clusters better than random assignment.
ETA: @#%#! I just got Dan Brown again, this time for the last post I wrote. This site is insulting me!
Looks like you are right. Two of my (larger, to give the algorithm more to work with) texts from other sources gave Cory Doctorow (technical piece) and again Lovecraft (a Hacker News comment about drug dogs?)
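If anyone wants something less opaque than that site, the crudest do-it-yourself comparison is character-trigram cosine similarity over pooled comments. The sketch below is only an illustration of that general idea, not whatever "I write like" actually does, and the placeholder strings would need to be replaced with real comment text before the number means anything.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character trigram counts: a crude but standard stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Placeholders: pool each account's comments into one string first.
ak_text = "..."
wn_text = "..."
print(cosine_similarity(char_ngrams(ak_text), char_ngrams(wn_text)))
```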
He can look like a moron or jerk, though, and there is even less risk for you in accepting it: he can necessarily only demand the $1000 from Will_Newsome.
For what it’s worth, I thought Mitchell’s hypothesis seemed crazy at first, then looked through user:AspiringKnitter’s comment history and read a number of things that made me update substantially toward it. (Though I found nothing that made it “extremely obvious”, and it’s hard to weigh this sort of evidence against low priors.)
Out of curiosity, what’s your estimate of the likelihood that you’d update substantially toward a similar hypothesis involving other LW users? …involving other users who have identified as theists or partial theists?
It used to be possible—perhaps it still is? - to make donations to SIAI targeted towards particular proposed research projects. If you are interested in taking up this bet, we should do a side deal whereby, if I win, your $1000 would go to me via SIAI in support of some project that is of mutual interest.
If someone takes the bet and some of the proceeds go to trike, they might agree to check the logs and compare IPs (a matching IP or even a proxy as a detection avoidance attempt could be interpreted as AK=WN). Of course, AK would have to consent.
I’m still surprised that our collective ingenuity has yet to find a practical solution. I don’t think anybody is trying very hard but it’s still surprising how little our knowledge of cryptography and such is helping us.
Anyway yeah, I really don’t think IPs provide much evidence. As wedrifid said if the IPs don’t match it only means that at least I’m putting a minimal amount of effort into anonymity.
I missed where he explicitly made a claim about it one way or the other.
The months went by, and at last on a day of spring Ged returned to the Great House, and he had no idea what would be asked of him next. At the door that gives on the path across the fields to Roke Knoll an old man met him, waiting for him in the doorway. At first Ged did not know him, and then putting his mind to it recalled him as the one who had let him into the School on the day of his coming, five years ago.
The old man smiled, greeting him by name, and asked, “Do you know who I am?”
Now Ged had thought before of how it was always said, the Nine Masters of Roke, although he knew only eight: Windkey, Hand, Herbal, Chanter, Changer, Summoner, Namer, Patterner. It seemed that people spoke of the Archmage as the ninth. Yet when a new Archmage was chosen, nine Masters met to choose him.
“I think you are the Master Doorkeeper,” said Ged.
“I am. Ged, you won entrance to Roke by saying your name. Now you may win your freedom of it by saying mine.” So said the old man smiling, and waited. Ged stood dumb.
He knew a thousand ways and crafts and means for finding out names of things and of men, of course; such craft was a part of everything he had learned at Roke, for without it there could be little useful magic done. But to find out the name of a Mage and Master was another matter. A mage’s name is better hidden than a herring in the sea, better guarded than a dragon’s den. A prying charm will be met with a stronger charm, subtle devices will fail, devious inquiries will be deviously thwarted, and force will be turned ruinously back upon itself.
“You keep a narrow door, Master,” said Ged at last. “I must sit out in the fields here, I think, and fast till I grow thin enough to slip through.”
“As long as you like,” said the Doorkeeper, smiling.
So Ged went off a little way and sat down under an alder on the banks of the Thwilburn, letting his otak run down to play in the stream and hunt the muddy banks for creekcrabs. The sun went down, late and bright, for spring was well along. Lights of lantern and werelight gleamed in the windows of the Great House, and down the hill the streets of Thwil town filled with darkness. Owls hooted over the roofs and bats flitted in the dusk air above the stream, and still Ged sat thinking how he might, by force, ruse, or sorcery, learn the Doorkeeper’s name. The more he pondered the less he saw, among all the arts of witchcraft he had learned in these five years on Roke, any one that would serve to wrest such a secret from such a mage.
He lay down in the field and slept under the stars, with the otak nestling in his pocket. After the sun was up he went, still fasting, to the door of the House and knocked. The Doorkeeper opened.
“Master,” said Ged, “I cannot take your name from you, not being strong enough, and I cannot trick your name from you, not being wise enough. So I am content to stay here, and learn or serve, whatever you will: unless by chance you will answer a question I have.”
“Ask it.”
“What is your name?”
The Doorkeeper smiled, and said his name: and Ged, repeating it, entered for the last time into that House.
I simply had not considered the logical implications of AspiringKnitter making the claim that she is not Will_Newsome, and had only noticed that no similar claim had appeared under the name of Will_Newsome.
It would be interesting if one claimed to be them both and the other claimed to be separate people. If Will_Newsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying. So that is something possible to learn from asking Will_Newsome explicitly. I hadn’t considered this when I made my original comment, which was made without thinking deeply.
If WillNewsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying.
Um? Supposing I’d created both accounts, I could certainly claim as Will that both accounts were me, and claim as AK that they weren’t, and in that case Will would be telling the truth.
Oh, so by “Will” you mean “any account controlled by Will” not “the account called Will_Newsome”. I think everyone else interpreted it as the latter.
Nick, it was pretty obvious to me that lessdazed and CuSithBell meant the person Will, not “any account controlled by Will” or “the account called Will_Newsome”—it doesn’t matter if the person would be using an account in order to lie, or an email in order to lie, or Morse code in order to lie, just that they would be lying.
It was “obvious” to me that lessdazed didn’t mean that and it would’ve been obvious to me that CuSithBell did mean that if I hadn’t been primed to interpret his/her comment in the light of lessdazed’s comment. Looking back I’m still not sure what lessdazed intended, but at this point I’m starting to think he/she meant the same as CuSithBell but unfortunately put an underscore between “Will” and “Newsome”, confusing the matter.
Oh, so by “Will” you mean “any account controlled by Will” not “the account called Will_Newsome”.
I think everyone else interpreted it the other way.
Well, this was my first post in the thread. I assume you are referring to this post by lessdazed? I thought at the time of my post that lessdazed was using it in the former way (though I’d phrase it “the person Will Newsome”), as you say—either Will lied with the Will account, or told the truth with the Will account and was thus AK, and thus lying with the AK account.
I now think it’s possible that they meant to make neither assumption, instead claiming that if the accounts were inconsistent in this way (if the Will account could not “control” the AK account) then this would indicate that Will (the account and person) was lying about being AK. This claim fails if Will can be expected to engage in deliberate trickery (perhaps inspired by lessdazed’s post), which I think should be a fairly uncontentious assertion.
(Maybe I should point out that this is all academic since at this point both AK and I have denied that we’re the same person, though I’ve been a little bit more coy about it.)
And then he (the person) is lying (also telling the truth, naturally, but I interpreted your claim that he would be telling the truth as a claim that he would not be lying).
This was my initial interpretation as well, but on reflection I think lessdazed meant “ask him if it’s okay if his IP is checked.” Although that puts us in a strange situation in that he’s then able to sabotage the credibility of another member through refusal, but if we don’t require his permission we are perhaps violating his privacy...
Briefly, my impulse was “but how much privacy is lost in demonstrating A is (probably—proxies, etc) not a sock puppet of B”? If there’s no other information leaked, I see no reason to protect against a result of “BAD/NOTBAD” on privacy grounds. However, that is not what we are asking—we’re asking if two posters come from the same IP address. So really, we need to decide whether posters cohabiting should be able to keep that cohabitation private—which seems far more weighty a question.
I probably phrased it wrong. AK does not have to consent, but I would be surprised if the site admins would bother getting in the middle of this silly debate unless both parties ask for it and provide some incentive to do so.
Yes, it may be legal to check people’s IP addresses, but that doesn’t mean it’s morally okay to do so without asking; and if one does check, it’s best to do so privately (i.e. not publicize any identifying information, only the information “yup, it’s the same IP as another user”).
Yes, it may be legal to check people’s IP addresses, but that doesn’t mean it’s morally okay to do so without asking
No, but it still is morally ok. In fact it is usually the use of multiple accounts that is frowned upon, morally questionable or an outright breach of ToS—not the identification thereof.
I don't think sock puppets are always frowned upon. If Clippy and QuirinusQuirrel were sock puppets of regular users (I think Quirrell is, but not Clippy), they would be "good faith" ones (as long as they don't double downvote etc.), and I expect "outing" them would be frowned upon.
If AK is a sock puppet, then yeah, it’s something morally questionable the admins should deal with. But I wouldn’t extend that to all sock puppets.
Quirrell overtly claims to be a sock puppet or something like one (it’s kind of complicated), whereas Clippy has been consistent in its claim to be the online avatar of a paperclip-maximizing AI. That said, I think most people here believe (like good Bayesians) that Clippy is more likely to be a sockpuppet of an existing user.
Huh. Can you clarify what is morally questionable about another user posting pseudonymously under the AK account?
For example, suppose hypothetically that I was the user who’d created, and was posting as, AK, and suppose I don’t consider myself to have violated any moral constraints in so doing. What am I missing?
Having multiple sock puppets can be a dishonest way to give the impression that certain views are held by more members than in reality. This isn’t really a problem for novelty sockpuppets (Clippy and Quirrel), since those clearly indicate their status.
What’s also iffy in this case is the possibility of AK lying about who she claims to be, and wasting everybody’s time (which is likely to go hand-in-hand with AK being a sockpuppet of someone else).
If you are posting as AK and are actually female and Christian but would rather that fact not be known about your more famous “TheOtherDave” identity, then I don’t have any objection (as long as you don’t double vote, or show up twice in the same thread to support the same position, etc.).
I can see where double-voting is a problem, both for official votes (e.g., karma-counts) and unofficial ones (e.g., discussions on controversial issues).
I can also see where people lying about their actual demographics, experiences, etc. can be problematic, though of course that’s not limited to sockpuppetry. That is, I might actually be female and Christian, or seventeen and Muslim, or Canadian and Theosophist, or what-have-you, and still only have one account.
Hmm. I am generally a strong supporter of anonymity and pseudonymity. I think we just have to accept that multiple internet folks may come from the same meatspace body. You are right that sockpuppets made for rhetorical purposes are morally questionable, but that’s mostly because rhetoric itself is morally questionable.
My preferred approach is to pretend that names, numbers, and reputations don’t matter. Judge only the work, and not the name attached to it or how many comments claim to like it. Of course this is difficult, like the rest of rationality; we do tend to fail on these by default, but that part is our own problem.
Sockpuppetry and astroturfing is pretty clearly a problem, and being rational is not a complete defense. I’m going to have to think about this problem more, and maybe make a post.
What about if I bet you $500 that you're not Will_Newsome? That way you can prove your separate existence to me, get paid, and I can use the proof you give me to take a thousand from Mitchell Porter. In fact, I'll go as high as 700 dollars if you agree to prove yourself to me and Mitchell Porter.
Of course, this offer is isomorphic to you taking Mitchell’s bet and sending 300-500 dollars to me for no reason, and you’re not taking his bet currently, so I don’t expect you to be convinced by this offering either.
What possible proof could I offer you? I can’t take you up on the bet because, while I’m not Newsome, I can’t think of anything I could do that he couldn’t fake if this were a sockpuppet account. If we met in person, I could be the very same person as Newsome anyway; he could really secretly be a she. Or the person you meet could be paid by Newsome to pretend to be AspiringKnitter.
Well, I don’t know what proof you could offer me; but if we genuinely put 500 dollars either way on the line, I am certain we’d rapidly agree on a standard of proof that satisfied us both.
Nope, plenty of people onsite have met Will. I mean, I suppose it is not strictly impossible, but I would be surprised if he were able to present that convincingly as a dude and then later present as convincingly as a girl. Bonus points if you have long hair.
Excellent question. One way to deal with it is for all the relevant agents to agree on a bet that's actually specified… that is, instead of betting that "AspiringKnitter is/isn't the same person as Will_Newsome," bet that "two verifiably different people will present themselves to a trusted third party identifying as Will_Newsome and AspiringKnitter" and agree on a mechanism of verifying their difference (e.g., Skype).
You’re of course right that these are two different questions, and the latter doesn’t prove the former, but if y’all agree to bet on the latter then the former becomes irrelevant. It would be silly of anyone to agree to the latter if their goal was to establish the former, but my guess is that isn’t actually the goal of anyone involved.
Just in case this matters, I don’t actually care. For all I know, you and shokwave are the same person; it really doesn’t affect my life in any way. This is the Internet, if I’m not willing to take people’s personas at face value, then I do best not to engage with them at all.
I have a general heuristic that making one on one bets is not worthwhile as a way to gain money, as the other party’s willingness to bet indicates they don’t expect to lose money to me. I would also be surprised if a bet of this size, between two members of a rationalist website, paid off to either side (though I guess paying off as a donation to SIAI would not be so surprising). At this point though, I am guessing the bet will not go through.
Was there supposed to be a time limit on that bet offer? It seems like as long as the offer is available you and everyone else will have an incentive not to show all the evidence as a fully-informed betting opponent is less profitable.
Can you please talk more about the word “immortal?” As nothing in physics can make someone immortal, as far as I know, did you mean truly immortal, or long lived, or do you think it likely science will advance and make immortality possible, or what?
Allow me to invent (or put under the microscope a slight, existing) distinction.
“Poorly stated”—not explicit, without fixed meaning. The words written may mean any of several things.
“Poorly worded”—worded so as to mean one thing which is wrong, perhaps even obviously wrong, in which case the writer may intend for people to assume he didn’t mean the obviously wrong thing, but instead meant the less literal, plausibly correct thing.
I have several times criticized the use of the words “immortal” and “immortality” by several people, including EY. I agree with the analysis by Robin Hanson here, in which he argues that the word “immortality” distracts from what people actually intend.
I characterize the use of “immortality” on this site as frequently obviously wrong in many contexts in which it is used, in which it is intended to mean the near thing “living a very long time and not being as fragile as humans are now.” In other words, often it is a poor wording of clear concepts.
I’m not sure if you agree, or instead think that the goal of very long life is unclear, or poorly justified, or just wrong, or perhaps something else.
As far as I understand, EY believes that humans and/or AIs will be able to survive until at least the heat death of the Universe, which would render such entities effectively immortal (i.e., as immortal as it is possible to be). That said, I do agree with your assessment.
If someone believed that no human and/or AI will ever be able to last longer than 1,000 years—perhaps any mind goes mad at that age, or explodes due to a law of the universe dealing with mental entities, or whatever—that person would be lambasted for using “immortal” to mean beings “as immortal as it is possible to be in my opinion.”
It is unfortunate that we don’t have clearer single words for the more plausible, more limited alternatives, closer to
living a very long time and not being as fragile as humans are now.
Come to think of it, if de Grey's SENS program actually succeeded, we'd get the "living a very long time" but not the "not being as fragile as humans are now", so we could use terms to distinguish those. And all of the variations on these are distinct from uploading/ems, with the possibility of distributed backups.
Unfortunately, I suspect that neither of these is very likely to ultimately happen. SENS has curing cancer as a subtask. Uploading/ems requires a scanning technology fast enough to scan a whole human brain and fine-grained enough to distinguish synapse types. I think other events will happen first.
So, next thing: I think you should avoid the religion topic here. I mean, you are allowed to continue with it, but I fear you are going to wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change their opinions. Learning something new is refreshing; discussions about religion rarely are.
Admittedly, I think there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone here than someone here has of converting you.
So come, share some of your thoughts about what LW is doing wrong, or just take part in whatever discussions you find interesting. Welcome!
I’ll probably just leave soon anyway. Nothing good can come of this.
You guys are fine and all, but I’m not cut out for this. I’m not smart enough or thick-skinned enough or familiar enough with various things to be a part of this community. It’s not you, it’s me, for real, I’m not saying that to make you feel better or something. I’ve only made you all confused and upset, and I know it’s draining for me to participate in these discussions.
Not everyone will be accusatory like nyan_sandwich.
It’s fine, I’m not pitching a fit about a little crudeness. I really can take it… or I can stay involved, but I don’t think I can do both, unlike some people (like maybe you) who are without a doubt better at some things than I am. Don’t blame him for chasing me off, I know the community is welcoming.
And I’m not really looking for reassurance. Maybe I’ll sleep on it for a while, but I really don’t think I’m cut out for this. That’s fine with me, I hope it’s fine with you too. I might try to hang around the HP:MoR thread, I don’t know, but this kind of serious discussion requires skills I just don’t have.
All of that said, I really appreciate that sweet comment. Thank you.
I hope you’re not seeing the options as “keep up with all the threads of this conversation simultaneously” or “quit LW”. It’s perfectly OK to leave things hanging and lurk for a while. (If you’re feeling especially polite, you can even say that you’re tapping out of the conversation for now.)
(Hmm, I might add that advice to the Welcome post...)
But remember, fixing this sort of problem is ostensibly what we’re here for.
Education is ostensibly what high school teachers are there for, but if a student shows up who can’t read, the teachers don’t blame themselves, because they’re not there to teach basic skills like that.
Okay, ready to be shouted down. I’ll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I’ll probably just leave soon anyway. Nothing good can come of this. I don’t know why I’m doing this. I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
I’m Christian and female and don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
Interesting: how comfortable are you with the concept of being immortal but being under the yoke of an immortal, whimsical tyrant? Do you not see the irony at all? Besides, I think you’ll find “indefinite life extension” is the more appropriate term.
And more disappointingly, confirming what should have been completely off-the-mark predictions about what reception Knitter would get as a Christian. I confess myself surprised.
The boring explanation is that Laoch was taught at the feet of PZ Myers and Hitchens, who operate purely in places open for debate (atheist blogs are not like dinner tables); talk about the arguments of religious people not to them, but to audiences already sympathetic to atheism, and thus care little about principles of charity; and have a beef with religion-as-harmful-organization (e.g. “Hassidic Judaism hurts queers!”) and rather often with religious-people-as-outgroup-members (e.g. “Sally says abortion is murder because she’s trying to manipulate me!”), which interferes with their beef with religion-as-reasoning-mistake (e.g. “Sadi thinks he can derive knowledge in ways that violate thermodynamics!”).
The reading-too-much-HPMOR explanation is that Laoch is an altruistic Slytherin, who wants Knitter to think: “This is a good bunch. Not only are most people nice, but they can swiftly punish jerks. And there are such occasional jerks—I don’t have to feel silly about expecting a completely different reaction than I got, it was because bad apples are noisier.”
It stands for evaporative cooling and I’m not offended. It’s a pretty valid point.
(Laoch: I expect God not to abuse his power, hence I wouldn’t classify him as a whimsical tyrant. And part of my issue is with being turned into a computer, which sounds even worse than making a computer that acts like me and thinks it is me.)
I can’t decide which of MixedNuts’s hypotheses is more awesome.
(this is totally off-topic, but is there a “watch comment” feature hidden around the LW UI somewhere ? I am also interested to see AspiringKnitter’s opinion on this subject, but I just know I’ll end up losing track of it without technological assistance...)
Every LW comment has its own RSS feed. You can find it by going to the comment’s permalink URL and then clicking on “Subscribe to RSS Feed” from the right column or by adding ”/.rss” to the end of the aforementioned URL, whichever is easier for you. The grandparent’s RSS feed is here.
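If you’d rather grab the feed URL programmatically, here’s a minimal sketch of the “/.rss” trick described above; the permalink in the snippet is a made-up placeholder, not a real comment URL:

```python
# Minimal sketch of the "/.rss" trick: append "/.rss" to a comment's permalink.
# The permalink below is a made-up placeholder, not a real LW comment URL.
permalink = "http://lesswrong.com/lw/example_post_id/example_post_slug/example_comment_id"
rss_url = permalink.rstrip("/") + "/.rss"
print(rss_url)  # .../example_comment_id/.rss
```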
For one thing, I’m skeptical that an em would be me, but aware that almost everyone here thinks it would be. If it thought it was me, and they thought it was me, but I was already dead, that would be really bad. And if I somehow wasn’t dead, there could be two of us, both claiming to be the real person. God would never blunder into that by accident, believing he was prolonging my life.
And if it really was me, and I really was a computer, whoever made the computer would have access to all of my brain and could embed whatever they wanted in it. I don’t want to be programmed to, just as an implausible example, worship Eliezer Yudkowsky. More plausibly, I don’t want to be modified without my consent, which might be even easier if I were a computer. (For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
And if the computer was not me but was sentient (wouldn’t it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people?), then I wouldn’t want it to be vulnerable to involuntary modification, either. I’d feel a great deal of responsibility for it if I were alive, and if I were not alive, then it would essentially be the worst of both worlds. God doing this would not expose it to any more risk than all other living beings.
Does this seem rational to you, or have I said something that doesn’t make sense?
I’m going to scoop TheOtherDave on this topic, I hope he doesn’t mind :-/
But first of all, what do you mean by “an em” ? I think I know the answer, but I want to make sure.
If it thought it was me, and they thought it was me, but I was already dead, that would be really bad.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be “already dead” under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
There are two scenarios I can think of that help illustrate my point.
1). Let’s pretend that you and I know each other relatively well, though only through Less Wrong. But tomorrow, aliens abduct me and replace me with a machine that makes the same exact posts as I normally would. If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you’re talking to the real Bugmaster, or the replica ? If the answer is “no”, then how do you know that you aren’t talking to the replica at this very moment ? More importantly, why does it matter ?
2). Let’s say that a person gets into an accident, and loses his arm. But, luckily, our prosthetic technology is superb, and we replace his arm with a perfectly functional prosthesis, indistinguishable from the real arm (in reality, our technology isn’t nearly as good, but we’re getting there). Is the person still human ? Now let’s say that one of his eyes gets damaged, and similarly replaced. Is the person still human ? Now let’s say that the person has epilepsy, but we are able to implant a chip in his brain that will stop the epileptic fits (such implants do, in fact, exist). What if part of the person’s brain gets damaged—let’s say, the part that’s responsible for color perception—but we are able to replace it with a more sophisticated chip. Is the person still human ? At what point do you draw the line from “augmented human” to “inhuman machine”, and why do you draw the line just there and not elsewhere ?
there could be two of us and both claiming to be the real person.
Two copies of me would both be me, though they would soon begin to diverge, since they would have slightly different perceptions of the world. If you don’t believe that two identical twins are the same person, why would you believe that two copies are ?
More plausibly, I don’t want to be modified without my consent, which might be even easier if I were a computer.
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated encryption algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well. On the flip side, your (and mine, and everyone else’s) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won’t necessarily be a step down.
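For concreteness, here’s a minimal sketch of the kind of tamper-detection I have in mind (strictly speaking this is message authentication rather than encryption proper); the key and the data in the snippet are obviously placeholder stand-ins:

```python
# Minimal illustration: detecting unauthorized modification of stored data
# with an HMAC, using only Python's standard library.
import hmac
import hashlib

secret_key = b"example-key-known-only-to-the-owner"   # placeholder key
mind_state = b"serialized mind-state blob goes here"  # placeholder data

# Compute a tag when the data is stored...
tag = hmac.new(secret_key, mind_state, hashlib.sha256).hexdigest()

# ...and verify it later; any tampering with the data changes the tag.
def unmodified(data: bytes, stored_tag: str) -> bool:
    expected = hmac.new(secret_key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stored_tag)

assert unmodified(mind_state, tag)
assert not unmodified(mind_state + b"!", tag)
```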
(For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
So, you don’t want your mind to be modified without your consent, but you give unconditional consent to God to do so ?
wouldn’t it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people ?
I personally would answer “no”, because I believe that the concept of qualia is a bit of a red herring. I might be in the minority on this one, though.
An em would be a computer program meant to emulate a person’s brain and mind.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be “already dead” under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
If you create such a mind that’s just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won’t know the answer. In that sense, it isn’t me. If you then let us meet each other, it could tell me something.
If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you’re talking to the real Bugmaster, or the replica ? If the answer is “no”, then how do you know that you aren’t talking to the replica at this very moment ? More importantly, why does it matter ?
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, but it could actually be true that Bugmaster is in an alien jail being tortured. The machine also doesn’t have Bugmaster’s soul—it would be important to ascertain whether or not it did have a soul, though I’d have some trouble figuring out a test for that (but I’m sure I could—I’ve already got ideas, pretty much along the lines of “ask God”), and if it doesn’t, then it’s useless to worry about preaching the Gospel to the replica. (It’s probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.) This has implications for, e.g., reunions after death. Not to mention that if I’m concerned about the state of Bugmaster’s soul, I should worry about Bugmaster in the alien ship. And if both of them (the replica and the real Bugmaster) accept Jesus (a soulless robot couldn’t do that), it’s two souls saved rather than one.
At what point do you draw the line from “augmented human” to “inhuman machine”, and why do you draw the line just there and not elsewhere ?
That’s a really good question. How many grains of sand do you need to remove from a heap of sand for it to stop being a heap? I suppose what matters is whether the soul stays with the body. I don’t know where the line is. I expect there is one, but I don’t know where it is.
Of course, what do we mean by “inhuman machine” in this case? If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I’d venture to call it a real person.
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated encryption algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well.
And who does the programming and encrypting? That only one person (who has clearly not respected my wishes to begin with since I don’t want to be a computer, so why should xe start now?) can alter me at will to be xyr peon does not actually make me feel significantly better about the whole thing than if anyone can do it.
So, you don’t want your mind to be modified without your consent, but you give unconditional consent to God to do so ?
I feel like being sarcastic here, but I remembered the inferential distance, so I’ll try not to. There’s a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn’t know everything, and may or may not love me enough to die for me… and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God’s character. (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he’d modify a human against the human’s will.)
On the flip side, your (and mine, and everyone else’s) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won’t necessarily be a step down.
True. I consider the risk unacceptably high. I just think it’d be even worse as a computer. We have to practice our critical thinking as well as we can and avoid mind-altering chemicals like drugs and coffee. (I suppose you don’t want to hear me say that we have to pray for discernment, too?) A core tenet of utilitarianism is that we compare possibilities to alternatives. This is bad. The alternatives are worse. Therefore, this is the best.
I feel like being sarcastic here, but I remembered the inferential distance, so I’ll try not to. There’s a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn’t know everything, and may or may not love me enough to die for me… and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God’s character.
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood.…
Suppose that tonight I lay a minefield all around your house. In the morning, I tell you the minefield is there. Then I send my child to walk through it. My kid gets blown up, but this shows you a safe path out of your house and allows you to go about your business. If I then suggest that you should express your gratitude to me every day for the rest of your life, would you think that reasonable? … According to your theology, was hell not created by God?
(God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he’d modify a human against the human’s will.)
I once asked my best friend, who is a devout evangelical, how he could be sure that the words of the Bible as we have it today are correct, given the many iterations of transcription it must have gone through. According to him, God’s general policy of noninterference in free will didn’t preclude divinely inspiring the writers of the Bible to transcribe it inerrantly. At least according to one theist’s account, then, God was willing to interfere as long as it was something really important for man’s salvation. And even if you don’t agree with that particular interpretation, I’d like to hear your explanation of how the points at which God “hardened Pharaoh’s heart”, for example, don’t amount to interfering with free will.
I have nothing to say to your first point because I need to think that over and study the relevant theology (I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is). With regard to your second point, however, I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible. (I must disagree less cordially with anyone who espouses the inerrancy of only the King James Version.) I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt. A quick glance at the disagreements even among ancient manuscripts could tell you that.
I suppose if I complain about people with illogical beliefs making Christianity look bad, you’ll think it’s a joke...
I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is
I don’t really have a dog in this race. That said, Matthew 25:41 seems to point in that direction, although “prepared” is perhaps a little weaker than “made”. It does seem to imply control and deliberate choice.
That’s the first passage that comes to mind, anyway. There’s not a whole lot on Hell in the Bible; most of the traditions associated with it are part of folk as opposed to textual Christianity, or are derived from essentially fanfictional works like Dante’s or Milton’s.
The more general problem, of course, is that if you don’t believe in textual inerrancy (of whatever version of the Bible you happen to prefer), you still aren’t relying on God to decide which parts are correct.
As Prismattic said, if you discard inerrancy, you run into the problem of classifications. How do you know which parts of the Bible are literally true, which are metaphorical, and which have been superseded by the newer parts ?
I would also add that our material world contains many things that, while they aren’t as bad as Hell, are still pretty bad. For example, most animals eat each other alive in order to survive (some insects do so in truly terrifying ways); viruses and bacteria ravage huge swaths of the population, human, animal and plant alike; natural disasters routinely cause death and suffering on the global scale, etc. Did God create all these things, as well ?
That’s not a very good argument. “If you accept some parts are metaphorical, how do you know which are?” is a good one, but if you only accept transcription and translation errors, you just treat it like any other historical document.
My bad; for some reason I thought that when AK said,
I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible.
She meant that some parts of the Bible are not meant to be taken literally, but on second reading, it’s obvious that she is only referring to transcription and translation errors, like you said. I stand corrected.
I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt.
Well, that really depends on what your translation criteria are. :) Reading KJV and, say, NIV side-by-side is like hearing Handel in one ear and Creed in the other.
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood....
When I feel the urge, I go to r/debatereligion. The standards of debate aren’t as high as they are here, of course; but I don’t have to feel guilty about lowering them.
An em would be a computer program meant to emulate a person’s brain and mind.
That’s what I thought, cool.
If you create such a mind that’s just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won’t know the answer. In that sense, it isn’t me.
Agreed; that is similar to what I meant earlier about the copies “diverging”. I don’t see this as problematic, though—after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that’s probably a good thing.
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, but it could actually be true that Bugmaster is in an alien jail being tortured.
Ok, that’s a very good point; my example was flawed in this regard. I could’ve made the aliens more obviously benign. For example, maybe the biological Bugmaster got hit by a bus, but the aliens snatched up his brain just in time, and transcribed it into a computer. Then they put that computer inside of a perfectly realistic synthetic body, so that neither Bugmaster nor anyone else knows what happened (Bugmaster just thinks he woke up in a hospital, or something). Under these conditions, would it matter to you whether you were talking to the replica or the biological Bugmaster ?
But, in the context of my original example, with the (possibly) evil aliens: why aren’t you worried that you are talking to the replica right at this very moment ?
The machine also doesn’t have Bugmaster’s soul—it would be important to ascertain whether or not it did have a soul, though I’d have some trouble figuring out a test for that (but I’m sure I could—I’ve already got ideas, pretty much along the lines of “ask God”
I agree that the issue of the soul would indeed be very important; if I believed in souls, as well as a God who answers specific questions regarding souls, I would probably be in total agreement with you. I don’t believe in either of those things, though. So I guess my next two questions would be as follows:
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn’t count as you, and/or
b). Is there anything other than faith that causes you to believe that souls exist ?
If the answers to (a) and (b) are both “no”, then we will pretty much have to agree to disagree, since I lack faith, and faith is (probably) impossible to communicate.
It’s probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.
Well, yes, preaching to me or to any other atheist is very unlikely to work. However, if you manage to find some independently verifiable and faith-independent evidence of God’s (or any god’s) existence, I’d convert in a heartbeat. I confess that I can’t imagine what such evidence would look like, but just because I can’t imagine it doesn’t mean it can’t exist.
If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I’d venture to call it a real person.
Do you believe that a machine could, in principle, “feel like a human” without having a soul ? Also, when you say “feel”, are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject’s behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling ?
And who does the programming and encrypting?
That’s a good point: if you are trusting someone with your mind, how do you know they won’t abuse that trust ? But this question applies to your biological brain, as well, I think. Presumably, there exist people whom you currently trust; couldn’t the person who operates the mind transfer device earn your trust in a similar way ?
That only one person (who has clearly not respected my wishes to begin with since I don’t want to be a computer, so why should xe start now?)
Oh, in that scenario, obviously you shouldn’t trust anyone who wants to upload your mind against your will. I am more interested in finding out why you don’t want to “be a computer” in the first place.
and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. … (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he’d modify a human against the human’s will.)
You’re probably aware of this already, but just in case: atheists (myself included) would say (at the very minimum) that your first sentence contains logical contradictions, and that your second sentence is contradicted by evidence and most religious literature, even if we assume that God does exist. That is probably a topic for a separate thread, though; I acknowledge that, if I believed what you do about God’s existence and his character, I’d agree with you.
...and avoid mind-altering chemicals like drugs and coffee
Guilty as charged; I’m drinking some coffee right now :-/
I suppose you don’t want to hear me say that we have to pray for discernment, too?
I only want to hear you say things that you actually believe...
That said, let’s assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy. Anyway, let’s assume that this is the case. If your computerized mind under this scenario were able to think faster, and remember more, than your biological mind, wouldn’t that mean that your critical skills would greatly improve ? If so, you would be more resistant to persuasion and indoctrination, not less.
Agreed; that is similar to what I meant earlier about the copies “diverging”. I don’t see this as problematic, though—after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that’s probably a good thing.
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge? My answer would be that the one who was here first is me, which is problematic because I could be a replica, but only conditional on machines having souls or many of my religious beliefs being wrong. (If I learn that I am a replica, I must update on one of those.)
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn’t count as you, and/or
Besides being electronic and the fact that I might also be currently existing (can there be two ships of Theseus?), no. Oh, wait, yes; it SHOULDN’T count as me if we live in a country which uses deontological morality in its justice system. Which isn’t really the best idea for a justice system anyway, but if so, then it’s hardly fair to treat the construct as me in that case because it can’t take credit or blame for my past actions. For instance, if I commit a crime, it shouldn’t be blamed if it didn’t commit the crime. (If we live in a sensible, consequentialist society, we might still want not to punish it, but if everyone believes it’s me, including it, then I suppose it would make sense to do so. And my behavior would be evidence about what it is likely to do in the future.)
b). Is there anything other than faith that causes you to believe that souls exist ?
If by “faith” you mean “things that follow logically from beliefs about God, the afterlife and the Bible” then no.
Do you believe that a machine could, in principle, “feel like a human” without having a soul ?
No, but it could act like one.
Also, when you say “feel”, are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject’s behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling ?
When I say “feel like a human” I mean “feel” in the same way that I feel tired, not in the same way that you would be able to perceive that I feel soft. I feel like a human; if you touch me, you’ll notice that I feel a little like bread dough. I cannot perceive this directly, but I can observe things which raise the probability of it.
But something acting like a person is sufficient reason to treat it like one. We should err on the side of extending kindness where it’s not needed, because the alternative is to err on the side of treating people like unfeeling automata.
Presumably, there exist people whom you currently trust;
Since I can think of none that I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement, I feel no compulsion to trust anyone in a situation where I would be even more vulnerable. Trust has limits.
I only want to hear you say things that you actually believe...
I’m past underestimating you enough not to know that. I’m aware that believing something is a necessary condition for saying it; I just don’t know if it’s a sufficient condition.
That said, let’s assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy.
Those are some huge ifs, but okay.
If your computerized mind under this scenario were able to think faster, and remember more, than your biological mind, wouldn’t that mean that your critical skills would greatly improve ? If so, you would be more resistant to persuasion and indoctrination, not less.
Yes, and if we can prove that my soul would stay with this computer (as opposed to a scenario where it doesn’t but my body and physical brain are killed, sending the real me to heaven about ten decades sooner than I’d like, or a scenario where a computer is made that thinks like me only smarter), and if we assume all the unlikely things stated already, and if I can stay in a corporeal body where I can smell and taste and hear and see and feel (and while we’re at it, can I see and hear and smell better?) and otherwise continue being the normal me in a normal life and normal body (preferably my body; I’m especially partial to my hands), then hey, it sounds neat. That’s just too implausible for real life.
EDIT: oh, and regarding why I’m not worried now, it’s because I think it’s unlikely for it to happen right now.
So if I’m parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways:
it doesn’t have a soul, and consequently is denied the possibility of heaven,
it doesn’t have a sense of smell, taste, hearing, sight, or touch,
it doesn’t have my hands, or perhaps hands at all,
it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn’t be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I’m not exactly sure what your reasons for believing all of that are… they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
it doesn’t have a soul, and consequently is denied the possibility of heaven
More like “it doesn’t have a soul, therefore there’s nothing to send to heaven”.
(roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties)
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don’t expect them to feel. If I asked a philosopher what it’s like for a bat to be a bat, they’d understand the allusion I’d like to make here, but that’s awfully jargony. Here’s an explanation of the concept I’m trying to convey.
I don’t know whether that’s something you’ve overlooked or whether I’m asking a wrong question.
If it helps, I’ve read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on “What is it like to bat a bee?” in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we’re having.
I mean, I’m willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I’m happy to posit that there’s some kind of constraint—label it X for now—such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it’s possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that’s fine. It’s just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn’t matter very much.
That’s why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.
“IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.”
Does it matter?
What if we can run some bunch of algorithms on a computer that pass the Turing test but are provably non-sentient?
When it comes down to it we’re looking for something that can solve generalized problems willingly and won’t deliberately try to kill us.
It’s like the argument against catgirls. Some people would prefer to have human girls/boys but trust me sometimes a catgirl/boy would be better.
1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it is important to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make alive things morally relevant, or we have to consider its desires as well.
OK, fair enough if you’re looking for uploads. Personally I don’t care, as I take the position that the upload concept isn’t really me; it’s a simulated me, in the same way that a “spirit version of me”, i.e. a soul, isn’t really me either.
Please correct my logic if I’m wrong here: in order to take the position that an upload is provably you, the only feasible way to do the test is to have other people verify that it’s you. The upload saying it’s you doesn’t cut it, and neither does the upload just acting exactly like you. In other words, the test for whether an upload is really you doesn’t even require it to really be you, just to simulate you exactly. Which means that the upload doesn’t need to be sentient.
Please fill in the blanks in my understanding so I can get where you’re coming from (this is a request for information not sarcastic).
I endorse dthomas’ answer in the grandparent; we were talking about uploads.
I have no idea what to do with the word “provably” here. It’s not clear to me that I’m provably me right now, or that I’ll be provably me when I wake up tomorrow morning. I don’t know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I’m not sure the question even makes any sense.
To say that other people’s judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me then I’ve experienced a year of isolation and if X isn’t me then I haven’t experienced it and if X isn’t anyone then no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity to component parts—shared memory, continuity of experience, various sorts of physical similarity, etc. -- without anything left over. If a system has all those component parts—it remembers what I remember, it remembers being me, it looks and acts like me, etc. -- then our brains will attach that conceptual node to that system, and we’ll agree that that system is me, and that’s all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it’s mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There’s no way to study the child to answer that question, because the child’s national identity was never an attribute of the child in the first place.
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
You are hooked up to sensors monitoring all of your sensory input.
We scan you thoroughly.
You walk around for a year, interacting with the world normally, and we log data.
We scan you thoroughly.
We run your first scan through our simulation software, feeding it the year’s worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
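(To put that comparison step in toy code: this is just my own sketch, and the simulator, the scan format, and the distance metric are all hypothetical placeholders.)

```python
# Toy sketch of the comparison step above, not an actual proposal: replay the
# logged year of sensory inputs through a simulator seeded with the first scan,
# then check the result against the second scan. `simulate_step`, the scan
# format, `distance`, and `tolerance` are all hypothetical placeholders.
def matches_second_scan(scan_start, sensory_log, scan_end,
                        simulate_step, distance, tolerance):
    state = scan_start
    for sensory_input in sensory_log:          # one entry per logged moment
        state = simulate_step(state, sensory_input)
    return distance(state, scan_end) <= tolerance   # "some ridiculous tolerance"
```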
Do you expect that there is a way in which you are sentient, in which your simulation could not be if you plugged it into (say) a robot body or virtual environment that would feed it new sensory data?
That is a very good response and my answer to you is:
I don’t know
AND
To me it doesn’t matter, as I’m not for any kind of destructive scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say to manipulate you?
You make sense. I’m starting to think a computer could potentially be sentient. Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
I personally believe that humans are likewise machines, generally made of meat, that run “programs”. I put the word “programs” in scare-quotes because our programs are very different in structure from computer programs, though the basic concept is the same.
What we have in common with computers, though, is that our programs are self-modifying. We can learn, and thus change our own code. Thus, I see no categorical difference between humans and computers, though obviously our current computers are far inferior to humans in many (though not all) areas.
That’s a perfectly workable model of a computer for our purposes, though if we were really going to get into this we’d have to further explore what a circuit is.
Personally, I’ve pretty much given up on the word “sentient”… in my experience it connotes far more than it denotes, such that discussions that involve it end up quickly reaching the point where nobody quite knows what they’re talking about, or what talking about it entails. I have the same problem with “qualia” and “soul.” (Then again, I talk comfortably about something being or not being a person, which is just as problematic, so it’s not like I’m consistent about this.)
But that aside, yeah, if any physical thing can be sentient, then I don’t see any principled reason why a computer can’t be. And if I can be implemented in a physical thing at all, then I don’t see any principled reason why I can’t be implemented in a computer.
Also (getting back to an earlier concern you expressed), if I can be implemented in a physical thing, I don’t see any principled reason why I can’t be implemented in two different physical things at the same time.
I agree, Dave. Also I’ll go further: for my own personal purposes, I care not a whit if a powerful piece of software that passes the Turing test, can do cool stuff, and won’t kill me is basically an automaton.
I would go one step further, and claim that if a piece of software passes the general Turing test—i.e., if it acts exactly like a human would act in its place—then it is not an automaton.
And I’d say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I’d say no, but the point is irrelevant.
Why not, and why is it irrelevant ? For example, if your car gets stolen, and later returned to you, wouldn’t you want to know whether you actually got your own car back ?
I have to admit, your response kind of mystified me, so now I’m intrigued.
No, I’d not particularly care whether it was my own car that was returned to me, because it gives me utility and it’s just a thing.
I’d care if my wife were kidnapped and some simulacrum given back in her stead, but I doubt I would be able to tell if it were such an accurate copy. If I knew the fake wife was fake I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my “wife” back.
In the case of the simulated porn actress, I wouldn’t really care if she was real because her utility for me would be similar to watching a movie. Once done with the simulation she would be shut off.
That said, the struggle would be with whether or not she (the catgirl version of the porn actress) was truly sentient. If she were truly sentient, then I’d be evil in the first place, because I’d be coercing her to do evil stuff in my personal simulation. But I think there’s no viable way to determine sentience other than “if it walks like a duck and talks like a duck”, so we’re back to the beginning again, and THUS I say “it’s irrelevant”.
I’d care if my wife were kidnapped and some simulacrum given back in her stead, but I doubt I would be able to tell if it were such an accurate copy. If I knew the fake wife was fake I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my “wife” back.
My primary concern in a situation like this is that she’d be kidnapped and presumably extremely not happy about that.
If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that’s just teleporting (with less savings on airfare.) If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn’t particularly bother me. (I suppose this sounds bold, but I’m almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their “deaths” would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
The closest analogy I can think of is if I lived in a culture where families only had one child each, and was suddenly introduced to brothers. It would be strange to find two people who shared parents, a childhood environment, and so forth—attributes I was accustomed to treating as uniquely associated with a person, but it turned out I was wrong to do so. It would be disconcerting, but I expect I’d get used to it.
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
If you count a fertilized egg as a person, then two identical twins did use to be the same person. :-)
While I don’t doubt that many people would be OK with this, I wouldn’t be, because of the lack of certainty and provability.
My difficulty with this concept goes further.
Since it’s not verifiable that the copy is you, even though it seems to present the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
“Oh but the copies running in a simulation are the same thing as the originals really”, protests the AI after all the humans have been destructively scanned and copied into a simulation...
1) The AI and I agree on what constitutes a person. In that case, the AI doesn’t destroy anything I consider a person.
2) The AI considers X a person, and I don’t. In that case, I’m OK with deleting X, but the AI isn’t.
3) I consider X a person, and the AI doesn’t. In that case, the AI is OK with deleting X, but I’m not.
You’re concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person’s existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is. But why would we assume that? It seems implausible to me.
Ha Ha. You’re right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, I’m saying exactly that, though I hadn’t broken it down to that extent before.
The last part I disagree with, which is that I assume I’m always better at detecting people than the AI is. Clearly I’m not, but in my own personal case I don’t trust the AI if it disagrees with me, because of simple risk management: if it’s wrong and it kills me and then resurrects a copy, I have experienced total loss; if it’s right, I’m still alive.
But I don’t know the answer. And thus I would have to say that it would be necessary to only allow scenario #1 if I were designing the AI, because though I could be wrong, I’d prefer not to take the risk of personal destruction.
That said if someone chose to destructively scan themselves to upload that would be their personal choice.
Well, I certainly agree that all else being equal we ought not kill X if there’s a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that.
But if for whatever reason I’m in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I’m the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI’s judgment, or less.
And obviously that’s going to depend on the particulars of X, Y, me, and the AI… but it’s certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
I think we’re on the same page from a logical perspective.
My guess is the perspective taken is that of physical science vs compsci.
My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other, with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy, etc., but different positions. There’s no way to distinguish the two of them by their physical properties, but there are two of them, not one.
Regardless, if you believe they are the same person then you go first through the teleportation device… ;->
In Identity Isn’t In Specific Atoms, Eliezer argued that even from what you called the “physical science perspective,” the two electrons are ontologically the same entity. What do you make of his argument?
What do I make of his argument? Well, I’m not a PhD in physics, though I do have a bachelor’s in physics/math, so my position would be the following:
Quantum physics doesn’t scale up to the macro level. While swapping two helium atoms between two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even “teleporting” one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, the quantum state changes; so the reason you cannot distinguish two identical copies from each other is not that they are identical, it’s that you cannot even distinguish the original from itself, because the states change each time you measure them.
You could not distinguish the atoms of a macro-scale object composed of multiple atoms of types A, B and C from those of another macro-scale object composed of atoms of types A, B and C in exactly the same configuration.
That said, we’re talking about a single object here. As soon as you go to comparing more than one object, it’s not the same: there are the position, momentum, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.
From a physics perspective it makes sense to say two objects of the same type are different even though the properties are the same except for minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn’t make any sense. The two instances of the classes ARE the same because they are logically the same.
Anyway, I’ve segued here:
Take the two putative electrons in a previous post above: there is no way to distinguish between the two of them except by position, but they ARE two separate electrons; they’re not a single electron. If one of them is part of, e.g., my brain and then it’s swapped out for the other, there’s no longer any way to tell which is which. It’s impossible. And my guess is this is what’s causing the confusion. From the point of view of usefulness, neither of the two objects is different from the other. But they are separate from each other, and destroying one doesn’t mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine: the position and number of copies are irrelevant for him, because it’s the information content that’s important.
For me, sure if my information content lived on that would be better than nothing but it wouldn’t be me.
I wouldn’t take a destructive upload if I didn’t know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn’t cross the street if I didn’t know I wasn’t going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Exactly. Reasonable assurance is good enough, absolute isn’t necessary.
I’m not willing to be destructively scanned even if a copy of me thinks it’s me, looks like me, and acts like me.
That said, I’m willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don’t ask me to do it. And expect a bullet if you try to force me!
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority… for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. … what then?
This is a different point entirely. Sure it’s more efficient to just work with instances of similar objects and I’ve already said elsewhere I’m OK with that if it’s objects.
And if everyone else is OK with being destructively scanned then I guess I’ll have to eke out an existence as a savage. The economy can have my atoms after I’m dead.
Sorry I wasn’t clear—the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it’s not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit… yes?
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily it’s none of my business.
If what you’re really asking, however, is whether I will attempt to intervene if I notice a group of invididuals or an organization forcing destructive scanning on individuals I suspect that I might but we’re not there yet.
I understand that you won’t consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn’t what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
I agree completely that there are two bunches of matter in this scenario. There are also (from what you’re labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn’t have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people… but what makes that a source of value?
For example: to my way of thinking, what’s valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what’s valuable about the person. There are other things associated with them—for example, a particular set of atoms—but from my perspective that’s pretty valueless. If I lose the atoms while preserving the data, I don’t care. I can always find more atoms; I can always construct a new body. But if I lose the data, that’s the ball game—I can’t reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don't care… I've kept what's valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care… I've lost what's valuable.
So when I look at a system to determine how many people are present in that system, what I’m counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren’t what’s valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what’s valuable about them with one copy of that data… I don’t need to lug a million bundles of atoms around.
So, as I say, that’s me… that’s what I value, and consequently what I think is important to preserve. You think it’s important to preserve the individual bundles, so I assume you value something different.
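(A side note on the counting criterion above: the "one person per unique pattern of data" idea can be sketched in code. This is only a toy illustration under a huge assumption, namely that a person's relevant state could be serialized at all; the dictionaries and the fingerprinting scheme below are purely hypothetical.)

```python
import hashlib
import json

def state_fingerprint(state: dict) -> str:
    """Hash a serialized mind-state; byte-identical states get identical fingerprints."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def count_people(bodies: list) -> int:
    """Count people as described above: one per unique state, however many bodies embody it."""
    return len({state_fingerprint(body_state) for body_state in bodies})

# 1,001 bodies, but only two unique patterns of data: two people on this view.
alice = {"memories": ["first day of school"], "preferences": {"tea": "green"}}
bob = {"memories": ["learned to ride a bike"], "preferences": {"tea": "none"}}
system = [alice] * 1000 + [bob]
assert count_people(system) == 2
```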
I understand that you value the information content and I’m OK with your position.
Let's do another thought experiment then: say we're some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country, just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories etc. into some kind of data store which it could access at its leisure later, would that be the same thing as the original living people?
I'd argue that from a comp-sci perspective what you have just done is build a static class which describes the people, their ideas, memories etc., but this is not the original people; it's just a model of them.
Now don’t get me wrong, a model like that would be very valuable, it just wouldn’t be the original.
And yes, of course some people value originals; otherwise you wouldn't have to pay millions of dollars for postage stamps printed in the 1800s, even though I'd guess that scanning such a stamp and printing out a copy of it would, to all intents and purposes, be the same.
In the thought experiment you describe, they’ve preserved the data and not the patterns of interaction (that is, they’ve replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
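To pin down the static-model versus dynamic-system distinction being drawn here, a toy sketch may help. Nothing below is anyone's actual proposal: the Agent class and its fields are hypothetical stand-ins, and "executing the model" is just resuming the object and letting it interact again.

```python
from dataclasses import dataclass, asdict

@dataclass
class Agent:
    """Toy stand-in for a scanned mind: stored state plus a rule for reacting to input."""
    memories: list
    mood: float

    def step(self, stimulus: str) -> None:
        # The "patterns of interaction": state evolves in response to the environment.
        self.memories.append(stimulus)
        self.mood += 0.1 if stimulus == "good news" else -0.1

original = Agent(memories=["childhood"], mood=0.0)

# The data store from the thought experiment: all of the data, none of the interaction.
snapshot = asdict(original)

# Executing the model restores a dynamic system; a snapshot left on disk never interacts with anything.
restored = Agent(**snapshot)
restored.step("good news")
```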
I understand that there’s something else in the original that you value, which I don’t… or at least, which I haven’t thought about. I’m trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn’t exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value being the original… or perhaps being an original, if that's different… and the value of that doesn't derive from anything, it's just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you’d previously thought?
I guess from your perspective you could say that the value of being the original doesn't derive from anything and it's just a primitive, because the macro information is the same except for position (though the quantum states are all different even at the point of copy). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally and in fact be sentient beings in their own right.
Yes, if I woke up tomorrow and you could convince me I was just a copy then this is something I have already modeled/daydreamed about and my answer would be: I’d be disappointed that I wasn’t the original but glad that I had existence.
Agreed. It’s the only way we have of verifying that it’s a duck.
But is the destructively scanned duck the original duck when, even though it appears to all intents and purposes to be the same, you can see the mulch that used to be the body of the original lying there beside the new copy?
I’m not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity—the rock doesn’t care, and I’m not convinced a duck would care either. Personal identity, however, is a whole other thing—there’s this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren’t right it’s not the same person. If you make a perfect copy of a person and destroy the original, then it’s the same person. You’ve just teleported them—even if you can see the left over dust from the destruction. Being made of the “same” atoms, after all, has nothing to do with identity—atoms don’t have individual identities.
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?
If there is some X that differs between those people, such that the label “me” applies to one value of X but not the other value, then talking about which one is “me” makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is “me,” we are not talking about anything in the world. Under those circumstances it’s best to set aside the question of which is “me.”
"(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label 'me'? What conceivable difference does it make whether we label both of those people 'me'?"
Because we already have a legal precedent. Twins.
Though their memories are very limited, they are legally different people.
My position is that this is rightly so.
Identical twins, even at birth, are different people: they’re genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I’m not sure why you’re bringing this up in the first place: legalities don’t help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you’re trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I’m not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I’m not a specialist in this area, though. What’s your reasoning?
Risk avoidance. I'm uncomfortable with taking the position that the second copy, created as the original is destroyed, is the original, simply because if it isn't, then the original is now dead.
Yes, but how do you conclude that a risk exists? Two philosophical positions don’t mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can’t yet trace down human thoughts and motivations to the neuron level, but we’ll certainly be able to by the time we’re able to destructively scan people into simulations; if there’s any secret sauce involved, we’ll by then know it’s there if not exactly what it is. If dualism turns out to win by then I’ll gladly admit I was wrong; but if any evidence hasn’t shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you’ve already assumed a conclusion to all questions along these lines—and in fact gone past all questions of risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.
Yes, we have two people after this process has completed… I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you’re getting at.
No, if we’re talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations starts to accumulate independent experiences, then we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven’t created a dozen new people, and if I delete all of those duplicates I haven’t destroyed any people.
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me, since it has my brain-states. Both the clone and the original are identical to the one who existed before my brain-states were copied, but they're not identical to each other, since they would start to have different experiences immediately. "Identical" here means "the same person as", not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I'm not an exact copy of myself from five seconds ago anyway. So the answer to the question is that killing the original quickly doesn't make a difference to the identity of the clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral.
Tell me if I’m not being clear.
Regardless of what you believe you’re avoiding the interesting question: if you overwrite your clone’s memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don’t think anyone doubts that a clone of me without my memories is a different person.
No, I'd not particularly care whether it was my own car that was returned to me, because it gives me utility and it's just a thing.
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back—wouldn't you be?
I'd care if my wife was kidnapped and some simulacrum was given back in her stead, but if it was such an accurate copy I doubt I would be able to tell. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know I'd just be so glad to have my "wife" back.
If the copy was so perfect that you couldn't tell that it wasn't your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all?
I think there's no viable way to determine sentience other than "if it walks like a duck and talks like a duck".
I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
Would I believe? I think the answer would depend on whether I could find the original or not.
I would, however, find it disturbing to be told that the copy was a copy.
And if the beings are fully sentient then yes, I agree it's ethically questionable.
But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.
Would I believe? I think the answer would depend on whether I could find the original or not.
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible.
I would, however, find it disturbing to be told that the copy was a copy.
Disturbing how? Wouldn't you automatically dismiss the person who tells you this as a crazy person? If not, why not?
But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.
Er… ok, that’s good to know. edges away slowly
Personally, if I encountered some beings who appeared to be sentient, I'd find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it's possible that they're not really sentient, but why risk it, when the probability of this being the case is so low?
You're right. It is impossible to determine whether the current copy is the original or not.
“Disturbing how?”
Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation or even reconstitute them from different atoms after being destructively scanned I'd be really uncomfortable with it. I personally would strenuously object to ever teleporting myself or copying myself by this method into a simulation.
“edges away slowly”
lol. Not any more evil than (I believe it was) Phil, who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the average utility function of humanity, even if it excluded my own maximized utility function, because I'm honest enough to say that maximizing my own personal utility function is not in the best interests of humanity.
Even then, I believe that producing an AI whose utility function is maximizing the best interests of humanity is incredibly difficult, and I have thus concluded that creating an AI whose definition is just NOT(Unfriendly) and attempting to trade with it is probably far easier. I have not read Eliezer's CEV paper, though, so I require further input.
“difficult to force them to do my bidding”.
I don't know if you enjoy video games or not. Right now there's a first-person shooter called Modern Warfare 3. It's pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they're automatons. Now fast forward 20 years and we have NPCs which are so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?
But if the technology existed to destructively scan an individual and copy them into a simulation or even reconstitute them from different atoms after being destructively scanned I’d be really uncomfortable with it.
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you'd been teleported in this manner? Would you still be uncomfortable with the process? If so, why, and how does it differ from the reversed situation that we discussed previously?
Not any more evil than (I believe it was) Phil, who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function.
Whoever that Phil guy is, I’m going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight.
Right now there's a first-person shooter called Modern Warfare 3. It's pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they're automatons.
I haven’t played that particular shooter, but I am reasonably certain that these NPCs wouldn’t come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test.
Now fast forward 20 years and we have NPCs which are so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?
I’m talking exactly about a process that is so flawless you can’t tell the difference.
Where my concern comes from is that if you don’t destroy the original you now have two copies. One is the original (although you can’t tell the difference between the copy and the original) and the other is the copy.
Now here's where I'm uncomfortable: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis (which I oppose) states that the original is still alive, and yet I can see the dead body there.
Simply speeding the process up perhaps by vaporizing the original doesn’t make the outcome any different, the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
That said, I’d be willing to become a hybrid organism slowly by replacing parts of me and although it wouldn’t be the original me at the end of the total replacement process it would still be the hybrid “me”.
Interesting position on the killing of the NPCs. In terms of usefulness, that's why it doesn't matter to me whether a being is sentient or not in order to meet my definition of AI.
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren’t one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we’re not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
I understand completely your logic but I do not buy it because I do not agree that at the instant of the copying you have one person at two locations. They are two different people. One being the original and the other being an exact copy.
I’m talking exactly about a process that is so flawless you can’t tell the difference. Where my concern comes from is that if you don’t destroy the original you now have two copies. One is the original (although you can’t tell the difference between the copy and the original) and the other is the copy.
Now here's where I'm uncomfortable: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis (which I oppose) states that the original is still alive, and yet I can see the dead body there.
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there—and you have a perfectly fine explanation for the presence of a dead body.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
Interesting position on the killing of the NPCs. In terms of usefulness, that's why it doesn't matter to me whether a being is sentient or not in order to meet my definition of AI.
I don’t think anyone was arguing that the AI needed to be conscious—intelligence and consciousness are orthogonal.
Original Copy A and new Copy B are indeed instances of person X but it’s not a class with two instances as in CompSci 101. The class is Original A and it’s B that is the instance. They are different people.
In order to make them the same person you'd need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes memories. Then they'd be part of the same hybrid entity. But at no point are they the same person.
Original Copy A and new Copy B are indeed instances of person X but it’s not a class with two instances as in CompSci 101. The class is Original A and it’s B that is the instance. They are different people.
I don’t know why it matters which is the original—the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other—but they’re both still Person X.
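To make the fork-and-diverge mechanics in this exchange concrete, here is a toy sketch; the PersonState class is hypothetical, and nothing in the code settles the disputed question of whether equality of recorded state amounts to being the same person. It only shows that the two objects compare equal at the instant of copying and stop comparing equal once their experiences differ.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class PersonState:
    """Toy mind-state; the == comparison looks only at the recorded data."""
    name: str
    memories: list = field(default_factory=list)

    def experience(self, event: str) -> None:
        self.memories.append(event)

original = PersonState("X", ["everything up to the scan"])
duplicate = copy.deepcopy(original)

# At the instant of duplication: equal state, but two distinct objects.
assert original == duplicate
assert original is not duplicate

# A moment later the experiences differ, and the recorded states diverge.
original.experience("watches the scanner power down")
duplicate.experience("wakes up on the receiving pad")
assert original != duplicate
```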
It matters to you if you’re the original and then you are killed.
You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact or even in law (whatever that means).
Also when/if this comes about I bet the law will side with me and define them as two different people in the eyes of the law. (And I’m not using this to fallaciously argue from authority, just pointing out I strongly believe I am correct—though willing to concede if there is ultimately some logical way to prove they are the same person.)
The reason is obvious. If they are the same person and one of them kills someone are both of them guilty?
If one fathers a child, is the child the offspring of both of them?
Because of this I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people from each other after their mental states begin to diverge, so I can't close the logical gap: why do you say they are the same person rather than copies of the same person, one of them being the original? You come partway to saying they are different people. Why not come all the way?
I agree with TheOtherDave. If you imagine that we scan someone’s brain and then run one-thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn’t matter if we turn one of those simulations off. Nobody’s died. What I’m saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states.
I’m not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn’t do it isn’t the same person as the one who did—they didn’t actually experience committing the murder or whatever. On the other hand, they’re also someone who would have done it in the same circumstances—so they’re dangerous. I don’t know.
it doesn’t matter if we turn one of those simulations off. Nobody’s died.
You are decreasing the amount of that person that exists.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Well, maybe. But there is a whole universe full of people who will never speak to you again and are left to grieve over your body.
You are decreasing the amount of that person that exists.
Yes, there is a measure of that person’s existence (number of perfect copies) which I’m reducing by deleting a perfect copy of that person. What I’m saying is precisely that I don’t care, because that is not a measure of people I value.
Similarly, if I gain 10 pounds, there’s a measure of my existence (mass) which I thereby increase. I don’t care, because that’s not a measure of people I value.
Neither of those statements is quite true, admittedly. For example, I care about gaining 10 pounds because of knock-on effects—health, vanity, comfort, etc. I care about gaining an identical backup because of knock-on effects—reduced risk of my total destruction, for example. Similarly, I care about gaining a million dollars, I care about gaining the ability to fly, there’s all kinds of things that I care about. But I assume that your point here is not that identical copies are valuable in some sense, but that they are valuable in some special sense, and I just don’t see it.
As far as MWI goes, yes… if you posit a version of many-worlds where the various branches are identical, then I don’t care if you delete half of those identical branches. I do care if you delete me from half of them, because that causes my loved ones in those branches to suffer… or half-suffer, if you like. Also, because the fact that those branches have suddenly become non-identical (since I’m in some and not the others) makes me question the premise that they are identical branches.
You are decreasing the amount of that person that exists.
And this "amount" is measured by the number of simulations? What if one simulation is using double the amount of atoms (e.g. by having thicker transistors); does it count twice as much? What if one simulation double-checks each result and another does not; does it count as two?
All that changes is the amplitude of your existence.
The equivalence between copies spread across the many worlds and identical simulations running in the same world is yet to be proven or disproven, and I expect it won't be proven or disproven until we have some better understanding of the hard problem of consciousness.
Can’t speak for APMason, but I say it because what matters to me is the information.
If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it’s the same person. If a person doesn’t contain any unique information, whether they live or die doesn’t matter nearly as much to me as if they do.
And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn’t make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn’t mean I’m a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn’t mean I’m the same person I was.
“If the information is different, and the information constitutes people, then it constitutes different people.”
True and therein lies the problem. Let’s do two comparisons:
You have two copies. One the original, the other the copy.
Compare them on the macro scale (i.e. non quantum). They are identical except for position and momentum.
Now let's compare them on the quantum scale: even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. The simple act of observing the states (either by scanning them or by rebuilding them) changes them, and thus on the quantum scale we have two different entities, even though they are identical on the macro scale except for position and momentum.
Using your argument that it's the information content that's important: they don't really have any useful differences from an information-content standpoint, especially not on the macro scale, but they have significant differences in all of their non-useful quantum states. They are physically different entities.
Basically what you’re talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.
I’m thus uncomfortable with killing one of them and then saying the person still exists.
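The "lossy copy" worry can be restated as a question about which fields are included in the comparison. A toy sketch, with the split into "macro" and "micro" fields as a purely illustrative stand-in (it is not meant as real physics): the two snapshots below are equal at the level one side of this discussion cares about, and unequal at the level the other side refuses to discount.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    macro: dict  # memories, personality, everything behaviourally measurable
    micro: dict  # stand-in for fine-grained physical state a scan cannot preserve

def same_at_macro_level(a: Snapshot, b: Snapshot) -> bool:
    return a.macro == b.macro

def same_in_full_detail(a: Snapshot, b: Snapshot) -> bool:
    return a.macro == b.macro and a.micro == b.micro

original = Snapshot(macro={"memories": ["the scan"]}, micro={"fine_state": 0.12345})
replica = Snapshot(macro={"memories": ["the scan"]}, micro={"fine_state": 0.54321})

assert same_at_macro_level(original, replica)      # identical in every measurable respect
assert not same_in_full_detail(original, replica)  # still a lossy copy at the finer level
```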
So, what you value is the information lost during the copy process? That is, we’ve been saying “a perfect copy,” but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren’t good enough?
Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
“Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?”
OK, I've mulled your question over and I think I now have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it's actually me that's alive if your plan involves killing me. Because we're basically creating an additional copy really quickly and then disposing of the original, I have a hard time believing that we're doing something equivalent to a single copy walking through a gate.
I don't believe that the information by itself is enough to answer the question "Is it the original me?" in the affirmative. And given that it's not even all of the information (though it is all of the information on the macro scale), I know for a fact we're doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether (A == B), but since I know from physics that they're not exactly equivalent once you go down to the quantum level, I just can't buy into it, though things would be murkier if the quantum states were provably identical.
Here’s what I’ve understood; let me know if I’ve misunderstood anything.
Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value… call that “something” X for convenience.
If P’ is a duplicate of P, then P’ does not possess X, or at least cannot be demonstrated to possess X.
This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.
X is preserved for P from one moment/day/year to the next, even though P’s information content—at a macroscopic level, let alone a quantum one—changes over time. I conclude that X does not depend on P’s information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring.
By similar reasoning, I conclude that X doesn’t depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.
I don’t have any idea of what that X might actually be; since we’ve eliminated from consideration everything about people I’m aware of.
I’m still interested in more details about X, beyond the definitional attribute of “X is that thing P has that P’ doesn’t”, but I no longer believe I can elicit those details through further discussion.
EDIT: Yes, you did understand, though I can't personally say that I'm willing to come out and say definitively that the X is a red herring, although it sounds like you are willing to do this.
I think it's an axiomatic difference, Dave.
It appears from my side of the table that you're starting from the axiom that all that's important is the information, and that originality and/or the physical existence embodying that information means nothing.
And you're dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (and the position is different, and the atoms are different, though unidentifiably so other than saying that the count is 2x rather than 1x of atoms), it's impossible to dismiss the question "Am I dying when I do this?", because you are making a lossy copy even from your standpoint. The only get-out clause is to say "it's a close enough copy, because the quantum states and position are irrelevant, because we can't measure the difference between atoms in two identical copies on the macro scale other than saying we've now got 2x the same atoms whereas before we had 1x."
It's exactly analogous to a bacterium budding. The original cell dies and something close to an exact copy is budded off.
If the daughter bacterium were an exact copy of the information content of the original, then from your position you'd have to say that it's the same bacterium and the original is not dead, right? Or maybe you'd say that it doesn't matter that the original died.
My response to that argument (if that is the line of reasoning you'd take; is it?) would be that it matters volitionally: if the original didn't want to die and it was forced to bud, then it's been killed.
I can't personally say that I'm willing to come out and say definitively that the X is a red herring, although it sounds like you are willing to do this.
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.
The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)
But I did say that identity of quantum states is a red herring.
As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren’t identical. If you believe that X can remain unchanged while Y changes, then you don’t believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don’t believe that identity depends on quantum states.
To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren’t me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don’t see any particular reason to avoid having the new person appear instantaneously in my mom’s house, rather than having it appear in an airplane seat an incremental distance closer to my mom’s house.
Other stuff:
Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.
I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.
I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)
A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.
“Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.”
OK good to know. I’ll have other questions but I need to mull it over.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.”
I agree with this but I don’t think it supports your line of reasoning. I’ll explain why after my meeting this afternoon.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)”
Interesting. I have a contrary line of argument which I’ll explain this afternoon.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)”
Disagree. Again I’ll explain why later.
“A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.”
Maybe. If you have destructively scanned them then you have killed them, so they now no longer exist; on that part you have complied perfectly with their wishes, from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract saying they have given you the right to make copies? Do they even own this right to make copies?
I don’t know.
What I can say is that our differences in opinion here would make a superb science fiction story.
There’s a lot of decent SF on this theme. If you haven’t read John Varley’s Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. “Steel Beach” isn’t a bad place to start.
Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK but didn't really touch much on our points of contention as such; in fact, I'd say it steered clear of them, since there wasn't really the concept of uploads etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.
"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then."
OK, I have to say that now I've thought it through, I think this is a straw-man argument: "you're not the same as you were yesterday" used as a pretext for saying that being copied is exactly the same as what you go through from one moment to the next. It is missing the point entirely.
Although you are legally the same person, it's true that you're not exactly physically the same person today as you were yesterday, and it's also true that you have almost none of the original physical matter or cells in you today that you had when you were a child.
That this is true in no way negates the main point: human physical existence does have continuity from one moment to the next. I have some of the same cells I had up to about seven to ten years ago, I have some inert matter in me from the time I was born, AND I have continuous memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.
Building a copy of yourself and then destroying the original has no continuity. It's directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and it is patently not the same concept as normal human existence. Not even close.
That you and some others might dismiss the differences is fine, and if you hypothetically wanted to take the position that killing yourself so that a copy of your mind-state could exist indefinitely is acceptable, then I have no problem with that; but it's patently not the same as the process you, I, and everyone else go through on a day-to-day basis. It's a new thing. (Although it's already been tried in nature, as the asexual budding process of bacteria.)
I would appreciate it, however, that if this is a choice being offered to others, it is clearly explained to them what is happening, i.e. physical body death and a copy being resurrected, not that they themselves continue living, because they do not. Whether you consider that irrelevant is beside the point. Volition is very important, but I'll get to that later.
"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)"
That's directly analogous to the many-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies then you go on existing. Fine, but it's not me. I'm mortal and always will be, even if some virtual copy of me might not be.
So you guessed correctly: unless we're using some different definition of "person" (which I think is likely), the person did not survive.
"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)"
Volition has everything to do with it.
While it's true that volition is independent of whether they have died or not (agreed), the reason it's important is that some people will likely take your position to justify forced destructive scanning at some point, because it's "less wasteful of resources" or some other pretext.
It’s also particularly important in the case of an AI over which humanity would have no control.
If the AI decides that uploads via destructive scanning are exactly the same thing as the originals, and it needs the space for its purposes, then there is nothing to stop it from just going ahead, unless volition is considered to be important.
Here’s a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
So here’s a scenario for you given that you think information is the only important thing:
Do you consider a person who has lost much of their memory to be the same person?
What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted on top? Did they just die? What if it's someone else's memories: did they just die?
Here's yet another scenario. I wonder if you have thought about this one:
Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mind-states into that clone. Ask yourself this question: how many deaths have taken place here?
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.
I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don’t continue living in the second case.
I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.
I agree that if a person is being offered a choice, it is important for that person to understand the choice. I’m perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I’m perfectly content not to describe the latter process as the continuation of an existing life.
I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.
Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
Yes. As I say, I endorse respecting individuals’ stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)
Do you consider a person who has lost much of their memory to be the same person?
It depends on what ‘much of’ means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I’ve left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it’s hard to say where that point is.
What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted on top?
I am content to say I’m the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I’m content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don’t think there’s any real fact of the matter here; it’s not wrong to say that I’m a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.
What if it’s someone else’s memories: did they just die?
If I am replaced by a different person’s memories and patterns of interaction, I cease to exist.
Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mind-states into that clone. How many deaths have taken place here?
Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.
Incidentally, I think you’re being a lot more adversarial here than this discussion actually calls for.
Very good response. I can't think of anything to disagree with and I don't think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
Here's a thought experiment for you to outline the difference (whether or not you think it makes sense from your position of only valuing the information):
Let’s say you could slowly transfer a person into an upload by the following method:
You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate) that can interface directly with the remaining neurons.
Am I dead? Yes but not all of me is and we’re now left with a hybrid being. It’s not completely me, but I’ve not yet been killed by the process and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn’t me).
Gradually over a process of time (let’s say years rather than days or minutes or seconds) all of the parts of the brain are replaced.
At the end of it I’m still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.
Now I know the position you’d take is that speeding that process up is mathematically equivalent.
It isn't, from my perspective. I'm dead instantly, and I don't get the chance to transition my existence in a way that is meaningful to me.
Sidetracking a little:
I suspect you were comparing your unknown quantity X to some kind of "soul". I don't believe in souls. I value being alive, having experiences, and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and then copies my information content into some kind of magical "spirit being" is exactly identical to deconstructing me (killing me) and making a copy, even if I took the position that the reconstructed being on "the last day" was me. Which I don't. As soon as I die, that's me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were possible).
You're basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get), and then superimposing another question: "isn't it the information that's important?"
Not exactly.
I'm concerned that I will die, and I'm examining the hypotheses as to why it's not me that dies. The best response I can come up with is "you will die, but it doesn't matter, because there's another identical (or as close as possible) copy still around."
As to what you value that I don't, I don't have an answer. Perhaps a way to elicit the answer would be to ask you why you only value the information and not the physical object as well.
I’m not asking why you should value yourself over an exact copy, I’m asking why you do. I’m asking you (over and over) what you value. Which is a different question from why you value whatever that is.
I’ve told you what I value, in this context. I don’t know why I value it, particularly… I could tell various narratives, but I’m not sure I endorse any of them.
As to what you value that I don't, I don't have an answer.
Is that a typo? What I’ve been trying to elicit is what xxd values here that TheOtherDave doesn’t, not the other way around. But evidently I’ve failed at that… ah well.
Thanks Dave. This has been a very interesting discussion and although I think we can’t close the gap on our positions I’ve really enjoyed it.
To answer your question "What do I value?": I think I answered it already; I value not being killed.
The difference in our positions appears to be some version of "but your information is still around", and my response is "but it's not me", and your response is "how is it not you?"
I don’t know.
"What is it I value that you don't?" I don't know. Maybe I consider myself to be a higher-resolution copy or a less lossy copy or something. I can't put my finger on it, because when it comes down to it, why do just random quantum states make a difference to me when all the macro information is the same apart from position and perhaps momentum? I don't really have an answer for that.
I’m not sure I care.
For example, if I had my evil way and I went FOOM, then part of my optimization process would involve mind control and somewhat deviant roleplay with certain porn actresses. Would I want those actresses to be controlled against their will? Probably not. But at the same time, it would be good enough if simulations were able to play the part of the actresses in a way where I could not tell the difference between the original and the simulated.
You wouldn’t prefer to forego the deviant roleplay for the sake of, y’know, not being evil?
But that’s not the point, I suppose. It sounds like you’d take the Experience Machine offer. I don’t really know what to say to that except that it seems like a whacky utility function.
How is the deviant roleplay evil if the participants are not being coerced, or are catgirls? And if it's not evil, then how would I be defined as evil just because I (sometimes, not always) like deviant roleplay?
That's the crux of my point. I don't reckon that optimizing humanity's utility function (or any individual's, for that matter) is the opposite of unfriendly AI, and I furthermore reckon that trying to seek that goal is much, much harder than trying to create an AI that at a minimum won't kill us all AND might trade with us if it wants to.
Oh, sorry, I interpreted the comment incorrectly: for some reason I assumed your plan was to replace the actual porn actresses with compliant simulations. I wasn't saying the deviancy itself was evil. Remember that the AI doesn't need to negotiate with you; it's superintelligent and you're not. And as for creating an AI that just ignores us but still optimises other things: well, it's possible, but I don't think it would be easier than creating FAI, and it would be pretty pointless; we want the AI to do something, after all.
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah I get you. So it’s by definition evil if I coerce the catgirls by mind control.
I suppose logically I can’t have my cake and eat it since I wouldn’t want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I've already said elsewhere that I wouldn't trust my own function to be the optimal one.
I doubt, however, that we’d easily find a candidate function from a single individual for similar reasons.
I think we've slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove sentience so they did as they were told, which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?), the moral character of which depends on things like whether you get the consent of the actresses involved.
But yes! Of course I want the AGI to do something. If it doesn’t do anything, it’s not an AI. It’s not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
Correct. I (unlike some others) don't hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore, destructively scanning the porn actresses would, in my mind, be killing them.
Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentient. If they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something; I just think it will be incredibly difficult to get it to do something if it’s recursively self-improving, and that the difficulty grows the further you move away from defining friendly as NOT(unfriendly).
Why is it recursively self-improving if it isn’t doing anything? If my end goal were not to do anything, I certainly wouldn’t need to modify myself in order to achieve that better than I already can.
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” is different from just “consciousness”.
I think it will be incredibly difficult to get it to do something if it’s recursively self-improving, and the difficulty grows the further you move away from defining friendly as NOT(unfriendly).
I disagree. Creating a not-friendly-but-harmless AGI wouldn’t be any easier than creating a full-blown FAI. You’ve already had to do all the hard work of making it consistent while self-improving, and you’ve also had to do the hard work of programming the AI to recognise humans and not harm them, while also acting on other things in the world. Here’s Eliezer’s paper.
Newsflash: the human body is a machine too! I’m being deliberately antagonistic here; it’s so obvious that a human (body and mind are the same thing) is a machine that it’s almost irrelevant to even mention it.
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge?
I would say that they both cease to be you, just as the current, singular “you” ceases to be that specific “you” the instant you see some new sight or think some new thought.
For instance, if I commit a crime, the other version shouldn’t be blamed for it, since it didn’t commit the crime.
Agreed, though I would put something like, “if a person diverged into two separate versions who then became two separate people, then one version shouldn’t be blamed for the crimes of the other version”.
On a separate note, I’m rather surprised to hear that you prefer consequentialist morality to deontological morality; I was under the impression that most Christians followed the Divine Command model, but it looks like I was wrong.
If by “faith” you mean “things that follow logically from beliefs about God, the afterlife and the Bible” then no.
I mean something like, “whatever it is that causes you to believe in God, the afterlife, and the Bible in the first place”, but point taken.
When I say “feel like a human” I mean “feel” in the same way that I feel tired...
Ooh, I see, I totally misunderstood what you meant. By “feel”, you mean “experience feelings”, thus something akin to qualia, right? But in this case, your next statement is problematic:
But something acting like a person is sufficient reason to treat it like one.
In this case, wouldn’t it make sense to conclude that mind uploading is a perfectly reasonable procedure for anyone (possibly other than yourself) to undergo ? Imagine that Less Wrong was a community where mind uploading was common. Thus, at any given point, you could be talking to a mix of uploaded minds and biological humans; but you’d strive to treat them all the same way, as human, since you don’t know which is which (and it’s considered extremely rude to ask).
This makes sense to me, but this would seem to contradict your earlier statement that you could, in fact, detect whether any particular entity had a soul (by asking God), in which case it might make sense for you to treat soulless people differently regardless of what they acted like.
On the other hand, if you’re willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one ?
Since I can think of none that I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement.
Good point. I should point out that some people do trust select individuals to do just that, and many more people trust psychiatrists and neurosurgeons enough to give them at least some control over their minds and brains. That said, the hypothetical technician in charge of uploading your mind would have a much greater degree of access than any modern doctor, so your objection makes sense. I personally would likely undergo the procedure anyway, assuming the technician had some way of proving that he has a good track record, but it’s possible I’m just being uncommonly brave (or, more likely, uncommonly foolish).
I’m aware that believing something is a necessary condition for saying it; I just don’t know if it’s a sufficient condition.
Haha yes, that’s a good point, you should probably stick to saying things that are actually relevant to the topic, otherwise we’d never get anywhere :-)
and while we’re at it, can I see and hear and smell better?
FWIW, this is one of the main goals of transhumanists, if I understand them correctly: to be able to experience the world much more fully than their current bodies would allow.
That’s just too implausible for real life.
Oh, I agree (well, except for that whole soul thing, obviously). As I said before, I don’t believe that anything like full mental uploading, not to mention the Singularity, will occur during my lifetime; and I’m not entirely convinced that such things are possible (the Singularity seems especially unlikely). Still, it’s an interesting intellectual exercise.
I typed up a response to this. It wasn’t a great one, but it was okay. Then I hit the wrong button and lost it and I’m not in the mood to write it over again because I woke up early this morning to get fresh milk. (By “fresh” I mean “under a minute from the cow to me”, if you’re wondering why I can’t go shopping at reasonable hours.) It turns out that four hours of sleep will leave you too tired to argue the same point twice.
That said,
On the other hand, if you’re willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one ?
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me. Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone. No need to act needlessly cruel anyway. Plus it’s good practice, since our justice system won’t decide personhood by asking God...
By “fresh” I mean “under a minute from the cow to me”, if you’re wondering why I can’t go shopping at reasonable hours.
That sounds ecolicious to a city-slicker such as myself, but all right :-)
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me.
Fair enough, though I would say that if we assume that souls do not exist, then creating copies is not a problem (other than that it might be a drain on resources, etc.), and uploading may actually dramatically decrease your risk of dying. But if we assume that souls do exist, then your objections are perfectly reasonable.
Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone.
That makes sense, but couldn’t you ask God somehow whether the person you’re talking to has a soul or not, and then act accordingly ? Earlier you indicated that you could do this, but it’s possible I misunderstood.
I apologize; earlier I deliberately glossed over a complicated thought process just to give the conclusion that maybe you could know, as opposed to explaining in full.
God has been known to speak to people through dreams, visions and gut feelings. That doesn’t mean God always answers when I ask questions, which probably has something to do with the weakness of my faith. You could ask and you could try to listen, and if God is willing to answer, and if you don’t ignore obvious evidence due to your own biases*, you could get an answer.

But God has, for whatever reason, chosen to be rather taciturn (I can only think of one person I know who’s been sent a vision from God), so you also might not get one. God might speak to one person about it but not to everyone, leaving others to wonder whether they can trust people’s claims, or to study the Bible and other relevant information to try to figure it out for themselves. And then there are people who just get stuff wrong and won’t listen, but insist they’re right and that God agrees with them, confusing anyone God hasn’t spoken to.

Hence, if you receive an answer and listen (something that’s happened to me, though not nearly every time I ask a question; at least, not unless we count finding the answer after asking by running into it in a book or something), you’ll know, but there’s also the possibility of just not finding out.
*There’s a joke I can’t find about some Talmudic scholars who are arguing. They ask God; a voice booms out from the heavens declaring which one is right, and the others fail to update.
I had to confront that one. Upvoted for being an objection a reasonable person should make.
Be familiar with how mental illnesses and other disorders that can affect thinking actually present. (Not just the DSM. Read what people with those conditions say about them.)
Be familiar with what messages from God are supposed to be like. (From Old Testament examples or Paul’s heuristic. I suppose it’s also reasonable to ascertain whether or not they fit the pattern for some other religion.)
Essentially, look at what your experiences best fit. That can be hard. But if your “visions” are highly disturbing and you become paranoid about your neighbors trying to kill you, it’s more likely schizophrenia than divine inspiration. This applies to other things as well.
Does it actually make sense? Or is it a message saying one thing, followed by another message of the same sort proclaiming the opposite, so that to believe one requires disbelieving the other?
Is there anything you can do to increase the probability that you’re mentally healthy? Is your thyroid okay? How are your adrenals? Either could get sick in a way that mimics a mood disorder. Also consider whether your lifestyle is conducive to mental health: sleep problems? Poor nutrition?
Run it by other people who know you well and whom you would trust to know if you were mentally ill.
No certainties. Just ways to be a little more sure. And that leads into the next one.
Pick the most likely interpretation and go with it and see if your quality of life improves. See if you’re becoming a better person.
But if your “visions” are highly disturbing and you become paranoid about your neighbors trying to kill you, it’s more likely schizophrenia than divine inspiration.
“The angel of the Lord appeareth to Joseph in a dream, saying, Arise, and take the young child and his mother, and flee into Egypt, and be thou there until I bring thee word: for Herod will seek the young child to destroy him. When he arose, he took the young child and his mother by night, and departed into Egypt.”
Does it actually make sense?
I work in a psych hospital, and the delusional patients there uniformly believe that their delusions make sense.
Run it by other people
This is the most likely to work. The delusional people I know are aware that other people disagree with their delusions. Relatedly, there is great disagreement on the topic of religion.
“The angel of the Lord appeareth to Joseph in a dream, saying, Arise, and take the young child and his mother, and flee into Egypt, and be thou there until I bring thee word: for Herod will seek the young child to destroy him. When he arose, he took the young child and his mother by night, and departed into Egypt.”
Good point. Of course, this one does make a testable prediction, and, unlike what might be more characteristic of a mental illness, the angel tells him there’s trouble, he avoids it, and we have no further evidence of his receiving any more such messages. That at least makes schizophrenia a much less likely explanation than just having a weird dream, so a weird dream is the explanation you’d want to try ruling out.
Be familiar with what messages from God are supposed to be like. (From Old Testament examples or Paul’s heuristic. I suppose it’s also reasonable to ascertain whether or not they fit the pattern for some other religion.)
I have to admit that I’m not familiar with Paul’s heuristic—what is it ?
As for the Old Testament, God gives out some pretty frightening messages in there, from “sacrifice your son to me” to “wipe out every man, woman, and child who lives in this general area”. I am reasonably sure you wouldn’t listen to a message like that, but why wouldn’t you ?
Pick the most likely interpretation and go with it and see if your quality of life improves. See if you’re becoming a better person.
I have heard this sentiment from other theists, but I still understand it rather poorly, I’m ashamed to admit… maybe it’s because I’ve never been religious, and thus I’m missing some context.
So, what do you mean by “a better person”; how do you judge what is “better” ? In addition, let’s imagine that you discovered that believing in, say, Buddhism made you an even better person. Would you listen to messages that appear to be Buddhist, and discard those that appear to be Christian but contradict Buddhism—even though you’re pretty sure that Christianity is right and Buddhism is wrong ?
I think I might be too tired to give this the response it deserves. If this post isn’t a good enough answer, ask me again in the morning.
I have to admit that I’m not familiar with Paul’s heuristic—what is it ?
That you can tell whether a spirit is good or evil by whether or not it says Jesus is Lord.
I have heard this sentiment from other theists, but I still understand it rather poorly, I’m ashamed to admit… maybe it’s because I’ve never been religious, and thus I’m missing some context.
Well, right here I mean that if you’ve narrowed it down to either schizophrenia or Christianity being true and God speaking to you: if it’s the former and it goes untreated, you expect to feel more miserable. If it’s the latter, you expect that embracing God will improve your quality of life. “Better person” here means “person who maximizes average utility better”.
That you can tell whether a spirit is good or evil by whether or not it says Jesus is Lord.
Oh, I see, and the idea here is that the evil spirit would not be able to actually say “Jesus is Lord” without self-destructing, right ? Thanks, I get it now; but wouldn’t this heuristic merely help you to determine whether the message is coming from a good spirit or an evil one, not whether the message is coming from a spirit or from inside your own head ?
if it’s the former and it goes untreated, you expect to feel more miserable.
I haven’t studied schizophrenia in any detail, but wouldn’t a person suffering from it also have a skewed subjective perception of what “being miserable” is ?
If it’s the latter, you expect that embracing God will improve your quality of life.
Some atheists claim that their life was greatly improved after their deconversion from Christianity, and some former Christians report the same thing after converting to Islam. Does this mean that the Christian God didn’t really talk to them while they were religious, after all—or am I overanalyzing your last bullet point ?
“Better person” here means “person who maximizes average utility better”.
Understood, though I was confused for a moment there. When other people say “better person”, they usually mean something like “a person who is more helpful and kinder to others”, not merely “a happier person”, though obviously those categories do overlap.
I just lost my comment by hitting the wrong button. Not being too tired today, though, here’s what I think in new words:
Oh, I see, and the idea here is that the evil spirit would not be able to actually say “Jesus is Lord” without self-destructing, right ? Thanks, I get it now; but wouldn’t this heuristic merely help you to determine whether the message is coming from a good spirit or an evil one, not whether the message is coming from a spirit or from inside your own head ?
Yes. That’s why we have to look into all sorts of possibilities.
I haven’t studied schizophrenia in any detail, but wouldn’t a person suffering from it also have a skewed subjective perception of what “being miserable” is ?
Speaking here only as a layperson who’s done a lot of research, I can’t think of any indication of that. Rather, they tend to be pretty miserable if their psychosis is out of control (with occasional exceptions). One biography I read recounts the author’s illness being mistaken for depression at first, and her believing that herself since it fit. That said, conventional approaches to treating schizophrenia don’t help much, if at all, with half of it: the half that most impairs quality of life. (Psychosis impairs it too, but as a quick explanation, patients also suffer from the “negative symptoms”, which include things like apathy and poor grooming; the “positive symptoms” are things like hearing voices and being delusional. In the rare* cases where medication works, it only treats positive symptoms and usually exacerbates negative symptoms. Just run down a list of side-effects and a list of negative symptoms; it helps if you know the jargon. Hence, poor quality of life.) So it’s also possible that receiving treatment for a mental illness you actually have would fail to increase quality of life. Add in abuses by the system and it could even decrease it, so this is definitely a problem.
Understood, though I was confused for a moment there. When other people say “better person”, they usually mean something like “a person who is more helpful and kinder to others”, not merely “a happier person”, though obviously those categories do overlap.
Aris understood correctly.
*About a third of schizophrenics are helped by medication. Not rare, certainly, but that’s less than half. Guidelines for treating schizophrenia are irrational. I will elaborate if asked, with the caveat that it’s irrelevant and I’m not a doctor.
I generally expect that people who make an effort to be X will subsequently report that being X improves their life, whether we’re talking about “convert to Christianity” or “convert to Islam” or “deconvert from Christianity” or “deconvert from Islam.”
Interesting—the flip side is “the grass is always greener.” I am not at all surprised that other effects dominate sometimes, or even a good deal of the time, however.
People can identify as Christian while being confused about what that means.
Can you clarify? Is it your claim that these “confused” Christians are the only ones who experience improved lives upon deconversion? Or did you mean something else?
I’m saying people can believe that they are Christians, go to church, pray, believe in the existence of God, and still be wrong about fundamental points of doctrine like “I require mercy, not sacrifice” or the two most important commandments, leading to people who think being Christian means they should hate certain people. There are also people who conflate tradition and divine command, leading to groups that believe being Christian means following specific rules which are impractical in modern culture and not beneficial.

I expect anyone like that to have an improved quality of life after they stop hating people and doing pointless things. I expect an even better quality of life if they stop doing the bad stuff and really study the Bible and try to be good people, with the caveat that quality of life for those people could be lowered by persecution in some times and places. (They could also end up persecuted for rejecting it entirely in other times and places. Or even the same ones.)
Basically, yeah, only if they’ve done something wrong in their interpretation of Scripture will they like being atheists better than being Christians.
My brain is interpreting that as “well, TRUE Christians wouldn’t be happier/better if they deconverted.” How is this not “No True Scotsman”?
Would you say you are some variety of Calvinist? I’m guessing not, since you don’t sound quite emphatic enough on this point. (For the Calvinist, it’s a point of doctrine that no one can cease being a Christian; anyone who does must not have been elect in the first place. I expect you already know this; I’m saying it for the benefit of anyone following the conversation who is lucky enough not to have heard of Calvinism. Also, lots of fundamentalist-leaning groups (e.g., Baptists) have a “once saved always saved” doctrine.)
I hope I’m not coming off confrontational; I had someone IRL tell me not too long ago that I must never have been a real Christian, and I found it very annoying, so I may be being a bit overly sensitive.
In the rare* cases where medication works, it only treats positive symptoms and usually exacerbates negative symptoms. … So it’s also possible that receiving treatment for a mental illness you actually have would fail to increase quality of life.
Could you elaborate on this point a bit? As far as I understand, at least some of the positive symptoms may pose significant risks to the patient’s safety (and possibly those around him, depending on severity). For example, a person may see a car coming straight at him and desperately try to dodge it, when in reality there’s no car. Or a person may fail to notice a car that actually exists. Or, in extreme cases, the person may believe that his neighbour is trying to kill him, take preemptive action, and murder an innocent. If I had symptoms like that, I personally would rather live with the negatives for the rest of my life than live with the vastly increased risk that I might accidentally kill myself or harm others, even knowing that I might feel subjectively happier until that happens.
Aris understood correctly.
Ok, that makes sense: by “becoming a better person”, you don’t just mean “a happier person”, but also “a person who’s more helpful and nicer to others”; and you choose to believe things that make you such a person.
I have to admit, this mode of thought is rather alien to me, and thus I have a tough time understanding it. To me, this sounds perilously close to wishful thinking. To use an exaggerated example, I would definitely feel happier if I knew that I had a million dollars in the bank. Having a million dollars would also empower me to be a better person, since I could donate at least some of it to charity, or invest it in a school, etc. However, I am not going to go ahead and believe that I have a million dollars, because… well… I don’t.
In addition, there’s a question of what one sees as being “better”. As we’d talked about earlier, at least some theists do honestly believe that persecuting gay people and forcing women to wear burqas is a good thing to do (and a moral imperative). Thus, they will (presumably) interpret any gut feelings that prompt them to enforce the burqa ordinances even harder as being good and therefore godly and true. You (and I), however, would do just the opposite. So, we both use the same method but arrive at diametrically opposed conclusions; doesn’t this mean that the method may be flawed ?
Short version: unsurprising because of things like this. People can identify as Christian while being confused about what that means.
My main objection to this line of reasoning is that it involves the “No True Scotsman” fallacy. Who is to say (other than the Pope, perhaps) what being a Christian “really means”? The more conservative Christians believe that feminism is a sin, whereas you do not; but how would you convince an impartial observer that you are right and they are wrong? You could say, “clearly such attitudes harm women, and we shouldn’t be hurting people”, but they’d just retort with, “yes, and incarcerating criminals harms the criminals too, but it must be done for the greater good, because that’s what God wants; He told me so”.
In addition, it is not the case that all people who leave Christianity (be it for another religion, or for no religion at all) come from such extreme sects as the one you linked to. For example, Julia Sweeney (*), a prominent atheist, came from a relatively moderate background, IIRC. More on this below:
Surprising. My model takes a hit here. Do you have links to firsthand accounts of this?
I don’t have any specific links right now (I will try to find some later), but apparently there is a whole website dedicated to the subject. Wikipedia also has a list. I personally know at least two people who converted from relatively moderate versions of Christianity to Wicca and Neo-Paganism, and report being much happier as the result, though obviously this is just anecdotal information and not hard data. In general, though, my impression was that religious conversions are relatively common, though I haven’t done any hard research on the topic. There’s an interesting-looking paper on the topic that I don’t have access to… maybe someone else here does ?
(*) I just happened to remember her name off the top of my head, because her comedy routine is really funny.
Yeah. You could feel a lot more unhappy if you take the pills usually prescribed to schizophrenics, because side-effects of those pills include mental fog and weight gain. You could also be a less helpful person to others, because you would be less able to do things if you’re on a high enough dose to “zombify” you. Also, Erving Goffman’s work shows that situations where people are in an institution, as he defines the term, cause people to become stupider and less capable. (Kudos to the mental health system for trying to get people out of those places faster; most people who go in now get out after a little while, as opposed to the months it usually took when he was studying. However, the problems aren’t eliminated and his research is still applicable.) Hence, it could make you a worse and unhappier person to undergo treatment.
(and possibly those around him, depending on severity)
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence. It’s correlated with self-harm, but not with hurting other people.
Mental illness is correlated (no surprise here) with being abused and with substance abuse. Both of those are correlated with violence, leading to higher rates of violence among the mentally ill. Even when not corrected for, the rate isn’t that high and the mentally ill are more likely to be victims of violent crime than perpetrators of it. But when those effects ARE corrected for, mental illness does not, by itself, cause violence.
At all. End of story. Axe-crazy villains in the movies are unrealistic and offensive portrayals of mental illness. /rant
I have to admit, this mode of thought is rather alien to me, and thus I have a tough time understanding it. To me, this sounds perilously close to wishful thinking. To use an exaggerated example, I would definitely feel happier if I knew that I had a million dollars in the bank. Having a million dollars would also empower me to be a better person, since I could donate at least some of it to charity, or invest it in a school, etc. However, I am not going to go ahead and believe that I have a million dollars, because… well… I don’t.
This mode of thought is alien to me too, since I wasn’t advocating it. I’m confused about how you could come to that conclusion. I have been unclear, it seems.
(Seriously, what?)
Okay, what I mean is this: suppose you think you only want to fulfill your own selfish desires, and then you become a Christian and, even though you don’t want to, decide it’s right to be nice to other people and to spend time praying; after a while you learn that being nice makes you really happy, and that praying makes you happier than you’ve ever been before. That’s what I meant.
In addition, there’s a question of what one sees as being “better”. As we’d talked about earlier, at least some theists do honestly believe that persecuting gay people and forcing women to wear burqas is a good thing to do (and a moral imperative). Thus, they will (presumably) interpret any gut feelings that prompt them to enforce the burqa ordinances even harder as being good and therefore godly and true. You (and I), however, would do just the opposite. So, we both use the same method but arrive at diametrically opposed conclusions; doesn’t this mean that the method may be flawed ?
Yes. It’s only to be used as an adjunct to thinking things through, not the end-all-be-all of your strategy for deciding what to do in life.
The more conservative Christians believe that feminism is a sin, whereas you do not; but how would you convince an impartial observer that you are right and they are wrong?
My argument isn’t with people who think feminism is sinful (would you like links to sane, godly people espousing the idea without being hateful?) but with the general tenor of the piece. See below.
My main objection to this line of reasoning is that it involves the “No True Scotsman” fallacy. Who is to say (other than the Pope, perhaps) what being a Christian “really means”?
Well, not the Pope, certainly. He’s a Catholic. But I thought a workable definition of “Christian” was “person who believes in the divinity of Jesus Christ and tries to follow his teachings”, in which case we have a pretty objective test. Jesus taught us to love our neighbors and be merciful. He repeatedly behaved politely toward women of poor morals, converting them with love and specifically avoiding condemnation. Hence, people who are hateful or condemn others are not following his teachings. If that was a mistake, that’s different, just like a rationalist could be overconfident—but to systematically do it and espouse the idea that you should be hateful clearly goes against what Jesus taught as recorded in the Bible. Here’s a quote from the link:
If I were a king, I’d make a law that any woman who wore a miniskirt would go to jail. I’m not kidding!
Compare it with a relevant quote from the Bible, which appears in different places in different versions; the NIVUK (New International Version UK) puts it at the beginning of John 8:
The teachers of the law and the Pharisees brought in a woman caught in adultery. They made her stand before the group
4 and said to Jesus, “Teacher, this woman was caught in the act of adultery.
5 In the Law Moses commanded us to stone such women. Now what do you say?”
6 They were using this question as a trap, in order to have a basis for accusing him. But Jesus bent down and started to write on the ground with his finger.
7 When they kept on questioning him, he straightened up and said to them, “If any one of you is without sin, let him be the first to throw a stone at her.”
8 Again he stooped down and wrote on the ground.
9 At this, those who heard began to go away one at a time, the older ones first, until only Jesus was left, with the woman still standing there.
10 Jesus straightened up and asked her, “Woman, where are they? Has no-one condemned you?”
11 “No-one, sir,” she said. “Then neither do I condemn you,” Jesus declared. “Go now and leave your life of sin.”
So, it’s not unreasonable to conclude that, whether or not Christianity is correct and whether or not it’s right to lock people up for wearing miniskirts, that attitude is unChristian.
I don’t have any specific links right now (I will try to find some later), but apparently there is a whole website dedicated to the subject. Wikipedia also has a list. I personally know at least two people who converted from relatively moderate versions of Christianity to Wicca and Neo-Paganism, and report being much happier as the result, though obviously this is just anecdotal information and not hard data.
… I thought a workable definition of “Christian” was “person who believes in the divinity of Jesus Christ and tries to follow his teachings”, in which case we have a pretty objective test. Jesus taught us to love our neighbors and be merciful. He repeatedly behaved politely toward women of poor morals, converting them with love and specifically avoiding condemnation. Hence, people who are hateful or condemn others are not following his teachings. If that was a mistake, that’s different, just like a rationalist could be overconfident—but to systematically do it and espouse the idea that you should be hateful clearly goes against what Jesus taught as recorded in the Bible.
I seem to be collecting downvotes, so I’ll shut up about this shortly. But to me, anyway, this still sounds like No True Scotsman. I suspect that nearly all Christians will agree with your definition (excepting Mormons and JW’s, but I assume you added “divinity” in there to intentionally exclude them). However, I seriously doubt many of them will agree with your adjudication. Fundamentalists sincerely believe that the things they do are loving and following the teachings of Jesus. They think you are the one putting the emphasis on the wrong passages. I personally happen to think you probably are much more correct than they are; but the point is neither one of us gets to do the adjudication.
I think this is missing the point: they believe that, but they’re wrong. The fact that they’re wrong is what causes them distress. If you’d like, we can taboo the word “Christian” (or just end the conversation, as you suggest).
I wouldn’t go disagreeing with him; I’d try performing a double-blind test of his athletic ability while wearing different pairs of socks. It just seems like the sort of thing that’s so simple to design and test that I don’t know if I could resist. I’d need three people and a stopwatch...
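If I ever did get to run it, the protocol is simple: one helper assigns “lucky” or “ordinary” socks by coin flip and says nothing, another helper does the timing without knowing which socks were assigned, and afterwards I check whether the gap in average times is bigger than what shuffling the labels around would produce by chance. Here’s a rough sketch of that last step (Python only because it’s handy, and every number in it is made up):

```python
import random

# Hypothetical sprint times in seconds, recorded by a timer who doesn't know
# which socks were worn on each trial (a separate assigner flipped a coin).
lucky_times    = [12.1, 11.9, 12.3, 12.0, 12.2, 11.8]
ordinary_times = [12.2, 12.0, 12.1, 12.3, 11.9, 12.4]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(ordinary_times) - mean(lucky_times)

# Permutation test: if the socks do nothing, the labels are arbitrary, so
# shuffling them should produce a gap at least this large reasonably often.
pooled = lucky_times + ordinary_times
n_lucky = len(lucky_times)
count, trials = 0, 100_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[n_lucky:]) - mean(pooled[:n_lucky]) >= observed_diff:
        count += 1

print(f"observed difference: {observed_diff:.3f} s")
print(f"one-sided p-value:   {count / trials:.3f}")
```

If the p-value isn’t small, the socks get no credit; if it is, I’d start worrying about my model of socks.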
I suspect that after the third or fifth such athlete, you’d develop the ability to resist, and simply have your opinion about his or her belief about socks, which you might or might not share depending on the circumstances.
Uh-oh, that’s a bad sign. If someone on LessWrong thinks something like that, I’d better give it credence. But now I’m confused because I can’t think what has given you that idea. Ergo, there appears to be evidence that I’ve not only made a mistake in thinking, but made one unknowingly, and failed to realize afterward or even see that something was wrong.
So, this gives me two questions and I feel like an idiot for asking them, and if this site had heretofore been behaving like other internet sites this would be the point where the name-calling would start, but you guys seem more willing than average to help people straighten things out when they’re confused, so I’m actually going to bother asking:
What do you mean by “basic premise” and “can’t question” in this context? Do you mean that I can’t consider his nonexistence as a counterfactual? Or is there a logical impossibility in my conception of God that I’ve failed to notice?
Can I have specific quotes, or at least a general description, of when I’ve been evasive? Since I’m unaware of it, it’s probably a really bad thinking mistake, not actual evasiveness—that or I have a very inaccurate self-concept.
Actually, no possibility seems good here (in the sense that I should revise my estimate of my own intelligence and/or honesty and/or self-awareness down in almost every case), except that something I said yesterday while in need of more sleep came out really wrong. Or that someone else made a mistake, but given that I’ve gotten several downvotes (over seventeen, I think) in the last couple of hours, that’s either the work of someone determined to downvote everything I say or evidence that multiple people think I’m being stupid.
(You know, I do want to point out that the comment about testing his lucky socks was mostly a joke. I do assign a really low prior probability to the existence of lucky socks anywhere, in case someone voted me down for being an idiot instead of for missing the point and derailing the analogy. But testing it really is what I would do in real life if given the chance.)
This isn’t a general objection to my religion, is it? (I’m guessing no, but I want to make sure.)
There is a man in the sky who created everything and loves all of us, even the 12-year-old girl getting gang-raped to death right now. His seeming contradictions are part of a grander plan that we cannot fathom.
Not how I would have put that, but mostly ADBOC this. (I wouldn’t have called him a man, nor would I have singled out the sky as a place to put him. But yes, I do believe in a god who created everything and loves all, and I ADBOC the bit about the 12-year-old; would you like to get into the Problem of Evil, or just agree to disagree on the implied point even though that’s a Bayesian abomination? And I agree with the last sentence.)
Can’t, won’t, unwilling to. Yes, it’s possible for you to question it, but you aren’t doing so.
I’d ask you what would look different if I did, but I think you’ve answered this below.
Sure you can. How is a universe not set in motion by God notably different from one that is?
You think I’m one of those people. Let me begin by saying that God’s existence is an empirical matter, something one could in principle either prove or disprove.
I worry about telling people why I converted because I fear ridicule or accusations of lying. However, I’ll tell you this much: I suddenly became capable of feeling two new sensations, neither of which I’d felt before and neither of which, so far as I know, has words in English to describe it. Sensation A felt like there was something on my skin, like dirt or mud, and something squeezing my heart, and was sometimes accompanied by a strange scent and almost always by feelings of distress. Sensation B never co-occurred with Sensation A. I could be feeling one, the other or neither, and could feel them to varying degrees. Sensation B felt relaxing, but also very happy and content and jubilant in a way and to a degree I’d never quite been before, and a little like there was a spring of water inside me, and like the water was gold-colored, and like this was all I really wanted forever, and a bit like love. After becoming able to feel these sensations, I felt them in certain situations and not in others. If one assumed that Sensation A was Bad and Sensation B was Good, then they were consistent with Christianity being true. Sometimes they didn’t surprise me. Sometimes they did—I could get the feeling that something was Bad even if I hadn’t thought so (had even been interested in doing it) and then later learn that Christian doctrine considered it Bad as well.
I do not think a universe without God would look the same. I can’t see any reason why a universe without God would behave as if it had an innate morality that seems, possibly, somewhat arbitrary. I would expect a universe without God to work just like I thought it did when I was an atheist. I would expect there to be nothing wrong (no signal saying Bad) with… well, anything, really. A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot. And I certainly wouldn’t expect to get a Good signal on the Bible but a Bad signal on other holy books.
So. That’s the better part of my evidence, such as it is.
If one assumed that Sensation A was Bad and Sensation B was Good, then they were consistent with Christianity being true. Sometimes they didn’t surprise me. Sometimes they did—I could get the feeling that something was Bad even if I hadn’t thought so (had even been interested in doing it) and then later learn that Christian doctrine considered it Bad as well.
This would be considerably more convincing if Christianity were a unified movement.
Suppose there existed only three religions in the world, all of which had a unified dogma and only one interpretation of it. Each of them had a long list of pretty specific doctrinal points, like one religion considering Tarot cards bad and another thinking that they were fine. If your Good and Bad sensations happened to precisely correspond to the recommendations of one particular religion, even in the cases where you didn’t actually know what the recommendations were beforehand, then that would be some evidence for the religion being true.
However, in practice there are a lot of religions, and a lot of different Christian sects and interpretations. You’ve said that you’ve chosen certain interpretations instead of others because that’s the interpretation that your sensations favored. Consider now that even if your sensations were just a quirk of your brain and mostly random, there are just so many different Christian sects and varying interpretations that it would be hard not to find some sect or interpretation of Christian doctrine that happened to prescribe the same things as your sensations do.
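To put a toy number on how easy such a match is, here’s a little simulation; every quantity in it is invented purely for illustration: twenty contested yes/no questions, five hundred sects or interpretations holding independent random positions on them, and an “oracle” that also answers at random.

```python
import random

# Toy model (all numbers invented): 20 contested yes/no moral questions and
# 500 sects/interpretations, each with independent random positions on them.
# A person's quirky "oracle" answers the same questions at random. How often
# does at least one existing sect agree with that oracle on 80% or more?
QUESTIONS, SECTS, THRESHOLD = 20, 500, 16   # 16/20 = 80% agreement
random.seed(0)
sects = [[random.randint(0, 1) for _ in range(QUESTIONS)] for _ in range(SECTS)]

trials, hits = 1000, 0
for _ in range(trials):
    oracle = [random.randint(0, 1) for _ in range(QUESTIONS)]
    best = max(sum(o == s for o, s in zip(oracle, sect)) for sect in sects)
    if best >= THRESHOLD:
        hits += 1

print(f"fraction of random oracles matched by some sect at >= 80%: {hits / trials:.2f}")
```

With these made-up numbers the fraction comes out well above one half: a purely random oracle would usually find some sect agreeing with it on at least 80% of the contested questions. The real numbers are different, of course, but the shape of the problem is the same.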
Then you need to additionally take into account ordinary cognitive flaws like confirmation bias: once you begin to believe in the hypothesis that your sensations reflect Christianity’s teachings, you’re likely to take relatively neutral passages and read into them doctrinal support for your position, and ignore passages which say contrary things.
In fact, if I’ve read you correctly, you’ve explicitly said that you choose the correct interpretation of Biblical passages based on your sensations, and the Biblical passages which are correct are the ones that give you a Good feeling. But you can’t then say that Christianity is true because it’s the Christian bits that give you the good feeling—you’ve defined “Christian doctrine” as “the bits that give a good feeling”, so “the bits that give a good feeling” can’t not be “Christian doctrine”!
Furthermore, our subconscious models are often accurate but badly understood by our conscious minds. For many skills, we’re able to say what’s the right or wrong way of doing something, but be completely unable to verbalize the reason. Likewise, you probably have a better subconscious model of what would be “typical” Christian dogma than you are consciously aware of. It is not implausible that you’d have a subconscious process making guesses on what would be a typical Christian response to something, giving you good or bad sensation based on that, and often guessing right (especially since, as noted before, there’s quite a lot of leeway in how a “Christian response” is defined).
For instance, you say that you hadn’t thought of Tarot cards being Bad before. But the traditional image of Christianity is that of being strongly opposed to witchcraft, and Tarot cards are used for divination, which is strongly related to witchcraft. Even if you hadn’t consciously made that connection, it’s obvious enough that your subconscious very well could have.
I don’t think the conclusion that the morality described by sensations A/B is a property of the universe at large has been justified. You mention that the sensations predict in advance what Christian doctrine describes as moral or immoral before you know directly what that doctrine says, but that strikes me as being an investigation method that is not useful, for two reasons:
Christian culture very heavily permeates most English-speaking cultures. A person who grows up in such a culture will have a high likelihood of correctly guessing Christianity’s opinion on any given moral question, even if they haven’t personally read the relevant text.
More generally, introspection is a very problematic way of gathering data. Many many biases, both obvious and subtle, come into play, and make your job way more difficult. For example: Did you take notes on each instance of feeling A or B when it occurred, and use those notes (and only those notes) later when validating them against Christian doctrine? If not, you are much more likely to remember hits than misses, or even to after-the-fact readjust misses into hits; human memory is notorious for such things.
A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot.
In a world entirely without morality, we constantly face situations where trusting another person would be mutually beneficial, but trusting when the other person betrays is much worse than mutual betrayal. Decision theory has a name for this type of problem: the Prisoner’s Dilemma. In a single game, the rational strategy is to defect, which makes for a pretty terrible world.
But when playing an indefinite number of games, it turns out that cooperating, then punishing defection is a strong strategy in an environment of many distinct strategies. That looks a lot like “turn the other cheek” combined with a little bit of “eye for an eye.” Doesn’t the real world behavior consistent with that strategy vaguely resemble morality?
In short, decision theory suggests that material considerations can justify a substantial amount of “moral” behavior.
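To make that concrete, here’s a small sketch of the standard iterated Prisoner’s Dilemma tournament (nothing in it is specific to this thread; the payoffs are the usual 3/3, 1/1 and 5/0, and the strategies are the textbook ones). “Cooperate first, then punish defection” is tit-for-tat, and pitted against a few other simple strategies the retaliating cooperators come out ahead of unconditional defection:

```python
# Round-robin iterated Prisoner's Dilemma with standard payoffs:
# mutual cooperation 3/3, mutual defection 1/1, lone defector 5, sucker 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(mine, theirs):
    return 'C'

def always_defect(mine, theirs):
    return 'D'

def tit_for_tat(mine, theirs):
    # Cooperate first, then copy the opponent's previous move.
    return theirs[-1] if theirs else 'C'

def grim_trigger(mine, theirs):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in theirs else 'C'

def tit_for_two_tats(mine, theirs):
    # Punish only after two consecutive defections.
    return 'D' if theirs[-2:] == ['D', 'D'] else 'C'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, grim_trigger, tit_for_two_tats,
              always_cooperate, always_defect]
totals = {s.__name__: 0 for s in strategies}
for i, a in enumerate(strategies):
    for b in strategies[i + 1:]:
        sa, sb = play(a, b)
        totals[a.__name__] += sa
        totals[b.__name__] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {total}")
```

That ordering (retaliating cooperators on top, unconditional defection at the bottom) is the decision-theoretic skeleton behind the “turn the other cheek, plus a little eye for an eye” remark above.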
Regarding your sensations A and B, from the outside perspective it seems like you’ve been awfully lucky that your sense of right and wrong matches your religious commitments. If you believed Westboro Baptist doctrine but still felt sensations A and B at the same times you feel them now, then you’d be doing sensation-A behavior substantially more frequently. In other words, I could posit that you have a built-in morality oracle, but why should I believe that the oracle should be labelled Christian? If I had the same moral sensations you do, why shouldn’t I call it rationalist morality?
If you believed Westboro Baptist doctrine but still felt sensations A and B at the same times you feel them now,
...I became a Christian and determined my religious beliefs based on sensations A and B. Why would I believe in unsupported doctrine that went against what I could determine of the world? I just can’t see myself doing that. My sense of right and wrong match my religious commitments because I chose my religious commitments so they would fit with my sense of right and wrong.
but why should I believe that the oracle should be labelled Christian?
Because my built-in morality oracle likes the Christian Bible.
Doesn’t the real world behavior consistent with that strategy vaguely resemble morality?
It’s sufficient to explain some, but not all, morality. Take tarot cards, for example. What was there in the ancestral environment to make those harmful? That just doesn’t make any sense with your theory of morality-as-iterated-Prisoner’s-Dilemma.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. “A implies B” is not equivalent to “B implies A”).
And if playing with tarot cards could open a doorway for demons to enter the world (or whatever wrong they cause), it seems perfectly rational to morally condemn tarot cards. I don’t morally condemn tarot cards because I think they have the same mystical powers as regular playing cards (i.e. none). Also, I’m not intending to invoke “ancestral environment” when I invoke decision theory.
And if playing with tarot cards could open a doorway for demons to enter the world (or whatever wrong they cause), it seems perfectly rational to morally condemn tarot cards.
But that’s already conditional on a universe that looks different from what most atheists would say exists. If you see proof that tarot cards—or anything else—summon demons, your model of reality takes a hit.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. “A implies B” is not equivalent to “B implies A”).
If tarot cards have mystical powers, I absolutely need to adjust my beliefs about the supernatural. But you seemed to assert that decision theory can’t say that tarot cards are immoral in the universes where they are actually dangerous.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. “A implies B” is not equivalent to “B implies A”).
I don’t understand. Can you clarify?
Alice has a moral belief that divorce is immoral. This moral belief is supported by objective evidence. She is given a choice between living in Dystopia, where divorce is permissible by law, and Utopia, where divorce is legally impossible. For the most part, Dystopia and Utopia are very similar places to live. Predictably, Alice chooses to live in Utopia. The consistency between Alice’s (objectively true) morality and Utopian law is evidence that Utopia is moral. It is not evidence that Utopia is the cause of Alice’s morality (i.e. it is not evidence that morality is Utopian; the grammatical ordering of the phrases is not helping me make my point).
But you seemed to assert that decision theory can’t say that tarot cards are immoral in the universes where they are actually dangerous.
Oh, I’m sorry. Yes, that does make sense. Decision theory WOULD assert it, but to believe they’re immoral requires belief in some amount of supernatural something, right? Hence it made no sense under my prior assumptions (namely, that there was nothing supernatural).
Alice has a moral belief that divorce is immoral. This moral belief is supported by objective evidence. She is given a choice between living in Dystopia, where divorce is permissible by law, and Utopia, where divorce is legally impossible. For the most part, Dystopia and Utopia are very similar places to live. Predictably, Alice chooses to live in Utopia. The consistency between Alice’s (objectively true) morality and Utopian law is evidence that Utopia is moral. It is not evidence that Utopia is the cause of Alice’s morality (i.e. it is not evidence that morality is Utopian; the grammatical ordering of the phrases is not helping me make my point).
Oh, I’m sorry. Yes, that does make sense. Decision theory WOULD assert it, but to believe they’re immoral requires belief in some amount of supernatural something, right? Hence it made no sense under my prior assumptions (namely, that there was nothing supernatural).
Accepting the existence of the demon portal should not impact your disbelief in a supernatural morality.
Anyways, the demons don’t even have to be supernatural. First hypothesis would be hallucination, second would be aliens.
I don’t see that decision theory cares why an activity is dangerous. Decision theory seems quite capable of imposing disincentives for poisoning (chemical danger) and cursing (supernatural danger) in proportion to their dangerousness and without regard to why they are dangerous.
The whole reason I’m invoking decision theory is to suggest that supernatural morality is not necessary to explain a substantial amount of human “moral” behavior.
You were not entirely clear, but you seem to be taking these as signals of things being Bad or Good in the morality sense, right? OK, so it feels like there is an objective morality. Let’s come up with hypotheses:
You have a morality that is the thousand shards of desire left over by an alien god. Things that were a good idea to do in the ancestral environment (for game-theoretic and other reasons) tend to feel good, so that you would do them. Things that feel bad are things you would have wanted to avoid. As we know, an objective morality is what a personal morality feels like from the inside. That is, you are feeling the totally natural feelings of morality that we all feel. As for why you attached special affect to the Bible, I suppose that’s the affect heuristic: you feel like the Bible is true and it is the center of your belief or something, and that goodness gets confused with moral goodness. This is all hindsight, but it seems pretty sound.
Or it could be Jesus-is-Son-of-a-Benevolent-Love-Agent-That-Created-the-Universe. I guess God is sending you signals to say what sort of things he likes/doesn’t like? Is that the proposed mechanism for morality? I don’t know enough about the theory to say much more.
OK, now let’s consider the prior. The complex loving god hypothesis is incredibly complicated. Minds are so complex we can’t even build one yet. It would take a hell of a lot more than your feeling-of-morality evidence to even raise this hypothesis to our attention; a lot more evidence than any scientific hypothesis has ever collected, I would say. You must have other evidence, not only to overcome the prior, but also to overcome all the evidence against a loving god who intelligently arranged anything.
Anyway, it sounds like you were primarily a moral nihilist before your encounter with the god-prescribes-a-morality hypothesis. Have you read Eliezer’s metaethics stuff? It deals with the subject of morality in a neutral universe quite well.
I’m afraid I don’t see why you call your reward-signal-from-God an “objective morality”. It sounds like the best course of action would be to learn the mechanism and seize control of it, like AIXI would.
I (as a human) already have a strong morality, so if I figured out that the agent responsible for all of the evil in the universe were directly attempting to steer me with a subtle reward signal, I’d be pissed. It’s interesting that you didn’t have that reaction. I guess that’s the moral nihilism thing. You didn’t know you had your own morality.
The complex loving god hypothesis is incredibly complicated. Minds are so complex we can’t even build one yet.
There are two problems with this argument. First, each individual god might be very improbable, but that could be counterbalanced by the astronomical number of possible gods (e.g. consider all possible tweaks to the holy book), so you can argue a priori against specific flavors of theism but not against theism in general. Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity. A powerful mind (or a program that blossoms into one) could even be simpler than physics as we currently know it, which is already quite complex and seems to have even more complexity waiting in store.
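To write the first point out explicitly (just a sketch under a Solomonoff-style prior; the 2^{-K} form is the usual convention, not something the argument depends on):

$$P(\text{theism}) \;=\; \sum_i P(G_i) \;\approx\; \sum_i 2^{-K(G_i)},$$

where the $G_i$ range over fully specified gods. Each term can be astronomically small while the sum over an astronomically large family (every tweak to every holy book) need not be, so the complexity penalty bites against any particular flavor of theism rather than against the disjunction.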
IMO a correct argument against theism should focus on the “loving” part rather than the “mind” part, and focus on evidence rather than complexity priors. The observed moral neutrality of physics is more probable if there’s no moral deity. Given what we know about evolution etc., it’s hard to name any true fact that makes a moral deity more likely.
I’m not sure that everything in my comment is correct. But I guess LW could benefit from developing an updated argument against (or for) theism?
Your argument about K-complexity is a decent shorthand, but it causes people to think that this “simplicity” thing is baked into the universe (universal prior), as if we had direct access to the universe (universal prior, reference machine language), and isn’t just another way of saying something is more probable after having updated on a ton of evidence. As you said, it should be about evidence, not priors. No one’s ever seen a prior; at best we have a brain’s frequentist judgment about which “priors” are good to use when.
Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity.
That may be somewhat misleading. A seed AI, denied access to external information, will be a moron. Yet the more information it takes into memory, the higher the K-complexity of the thing as a whole becomes.
You might be able to code a relatively simple AI in your garage, but if it’s going to be useful it can’t stay simple.
ETA: Also, if you take the computer system as a whole, with all of its programming libraries and hardware arrangements, even “hello world” would have high K-complexity. If you’re talking about whatever produces a given output on the screen, in terms of probability mass, I’m not sure it’s reasonable to separate the two out and treat K-complexity as simply a manifestation of high-level APIs.
For every program that could be called a mind, there are very, very many that are not.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam’s razor, there’s no reason to postulate fundamental gods.
Given that a god exists by other means (alien singularity), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow naive implementation of “perfect AI” should be about the size of the math required to define a “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now tho.
Now I’m unsure that a fundamental intelligence even means anything. AIXI, for example, is IIRC based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn’t be constrained by Bayes and such. Why not just replace it with a universe that is whatever morality maximised; max(morality) is simpler than god(morality) almost no matter how simple god is. Assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis (because physics is a simpler “morality” than anything else that would produce our world, and if we use physics, god doesn’t have to exist).
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in parent.
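A minimal way to formalise the “strictly defeated” ordering above, assuming a simplicity prior over hypotheses (my own sketch, not something from the comment): if H_2 makes exactly the same predictions as H_1 but needs extra description bits, e.g. H_2 = “a god who then runs physics” versus H_1 = “physics”, then

K(H_2) \geq K(H_1) \quad \Rightarrow \quad P(H_2) \leq P(H_1) \quad \Rightarrow \quad P(H_2 \mid E) \leq P(H_1 \mid E)

for any evidence E, because identical likelihoods leave the prior ordering untouched; the extra “god” bits are never paid back.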
I bet physics is a lot simpler than it appears right now tho.
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Why not just replace it with a universe that is whatever morality maximised; max(morality) is simpler than god(morality) almost no matter how simple god is.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)
Bayesians don’t believe in evidence, silly goose, you know that. Anyway, User:cousin_it, you’re essentially right, though I think that LW would benefit less from developing updated arguments and more from reading Aquinas, at least in the counterfactual universe where LW knew how to read. Anyway. In the real world Less Wrong is hopeless. You’re not hopeless. As a decision theorist you’re trying to find God, so you have to believe in him in a sense, right? And if you’re not trying to find God you should probably stay the hell away from FAI projects. Just sayin’.
A really intelligent response, so I upvoted you, even though, as I said, the feeling surprised me by telling me (just as one example) that tarot cards are Bad when I had not even considered the possibility, so I doubt it came from inside me.
Well, you are obviously not able to predict the output of your own brain; that’s the whole point of having a brain. If morality is in the brain and still too complex to understand, you would expect to encounter moral feelings that you had not anticipated.
Er, I thought it was overall pretty lame, e.g. the whole question-begging w.r.t. the ‘prior probability of omnibenevolent omnipowerful thingy’ thingy (nothing annoys me more than abuses of probability theory these days, especially abuses of algorithmic probability theory). Perhaps you are conceding too much in order to appear reasonable. Jesus wasn’t very polite.
By the way, in case you’re not overly familiar with the heuristics and biases literature, let me give you a hint: it sucks. At least the results that most folk around here cite have basically nothing to do with rationality. There’s some quite good stuff with tons of citations, e.g. Gigerenzer’s, but Eliezer barely mentioned it to Less Wrong (as fastandfrugal.com, which he endorsed) and therefore, as expected, Less Wrong doesn’t know about it. (Same with interpretations of quantum mechanics, as Mitchell Porter often points out. I really hope that Eliezer is pulling some elaborate prank on humanity. Maybe he’s doing it unwittingly.)
Anyway the upshot is that when people tell you about ‘confirmation bias’ as if it existed in the sense they think it does then they probably don’t know what the hell they’re talking about and you should ignore them. At the very least don’t believe them until you’ve investigated the literature yourself. I did so and was shocked at how downright anti-informative the field is, and less shocked but still shocked at how incredibly useless statistics is (both Bayesianism as a theoretical normative measure and frequentism as a practical toolset for knowledge acquisition). The opposite happened with the parapsychology literature, i.e. low prior, high posterior. Let’s just say that it clearly did not confirm my preconceptions; lolol.
Lastly, towards the esoteric end: All roads lead to Rome, if you’ll pardon a Catholicism. If they don’t, it’s not because the world is mad qua mad; it is because it is, alas, sinful. An easy way to get to hell is to fall into a fully-general-counterargument black hole, or a literal black hole maybe. Those things freak me out.
(P.S. My totally obnoxious arrogance is mostly just a passive aggressive way of trolling LW. I’m not actually a total douchebag IRL. /recursive-compulsive-self-justification)
I love how Less Wrong basically thinks that all evidence that doesn’t support its favored conclusion is bad because it just leads to confirmation bias. “The evidence is on your side, granted, but I have a fully general counterargument called ‘confirmation bias’ that explains why it’s not actually evidence!” Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn’t actually exist. (Eliezer knew about the controversy, which is why his post is titled “Positive Bias”, which arguably also doesn’t exist, especially not in a cognitively relevant way.) Then they talk about Occam’s razor while completely failing to understand what algorithmic probability is actually saying. Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable! It’s like they’re trolling and I’m not supposed to feed them but they look sort of like a very hungry, incredibly stupid puppy.
Searching and skimming, the first link does not seem to actually say that confirmation bias does not exist. It says that it does not appear to be the cause of “overconfidence bias”—it seems to take no position on whether it exists otherwise.
Okay, yeah, I was taking a guess. There are other papers that talk about confirmation/positive bias specifically, a lot of them in the vein of this kinda stuff. Maybe Kaj’s posts called ‘Heuristics and Biases Biases?’ from here on LW reference some relevant papers too. Sorry, I have limited cognitive resources at the moment; I’m mostly trying to point in the general direction of the relevant literature because there’s quite a lot of it.
So I think you’re quite right that “supernatural” and “natural” are sets that contain possible universes of very different complexity, and that those two adjectives are not obviously relevant to the complexity of the universes they describe. I support tabooing those terms. But if you compare two universes, one of which is described most simply by the wave function and an initial state, and another which is described by the wave function, an initial state, and a further section of code describing the psychic powers of certain agents, the latter universe is a priori more unlikely (bracketing for the moment the simulation issue). Obviously, if psi phenomena can be incorporated into the physical model without adding additional lines of code, that’s another matter entirely.
Returning to the simulation issue, I take your position to be that there are conceivable “meta-physics” (meant literally; not necessarily referring to the branch of philosophy) which can make local complexities more common? Is that a fair restatement? I have a suspicion that this is not possible without paying the complexity back at the other end, though I’m not sure.
Anyway the upshot is that when people tell you about ‘confirmation bias’ as if it existed in the sense they think it does then they probably don’t know what the hell they’re talking about and you should ignore them.
...
I love how Less Wrong basically thinks that all evidence that doesn’t support its favored conclusion is bad because it just leads to confirmation bias. “The evidence is on your side, granted, but I have a fully general counterargument called ‘confirmation bias’ that explains why it’s not actually evidence!” Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn’t actually exist.
What was said that’s a synonym for, or otherwise invoked, confirmation bias?
It’s mentioned a few times in this thread re AspiringKnitter’s evidence for Christianity. I’m too lazy to link to them, especially as it’d be so easy to get the answer to your question with control+f “confirmation” that I’m not sure I interpreted your question correctly.
Just to echo the others that brought this up, I applaud your courage; few people have the guts to jump into the lions’ den, as it were. That said, I’m going to play the part of the lion (*) on this topic.
I suddenly became capable of feeling two new sensations, neither of which I’d felt before and neither of which, so far as I know, has words in English to describe it.
How do you know that these sensations come from a supernatural entity, and not from your own brain ? I know that if I started experiencing odd physical sensations, no matter how pleasant, this would be my first hypothesis (especially since, in my personal case, the risk of stroke is higher than average). In fact, if I experienced anything that radically contradicted my understanding of the world, I’d probably consider the following explanations, in order of decreasing likelihood:
I am experiencing some well-known cognitive bias.
My brain is functioning abnormally and thus I am experiencing hallucinations.
Someone is playing a prank on me.
Shadowy human agencies are testing a new chemical/biological/emissive device on me.
A powerful (yet entirely material) alien is inducing these sensations, for some reason.
A trickster spirit (such as a Kami, or the Coyote, etc.) is doing the same by supernatural means.
A localized god is to blame (Athena, Kali, the Earth Mother, etc.)
An omniscient, omnipotent, and generally all-everything entity is responsible.
This list is not exhaustive, obviously, it’s just some stuff I came up with off the top of my head. Each next bullet point is less probable than the one before it, and thus I’d have to reject pretty much every other explanation before arriving at “the Christian God exists”.
Is either of those well-known? What about the pattern with which they’re felt? Sound like anything you know? Me neither.
My brain is functioning abnormally and thus I am experiencing hallucinations.
That don’t have any other effect? That remain stable for years? With no other sign of mental illness? Besides, if I set out by assuming that I can’t tell anything because I’m crazy anyway, what good does that do me? It doesn’t tell me what to predict. It doesn’t tell me what to do. All it tells me is “expect nothing and believe nothing”. If I assume it’s just these hallucinations and everything else is normal, then I run into “my brain is functioning abnormally and I am experiencing hallucinations that tell me Christian doctrine is true even when I don’t know the doctrine in question”, which is the original problem you’re trying to explain.
A trickster spirit (such as a Kami, or the Coyote, etc.) is doing the same by supernatural means.
And instead of messing with me like a real trickster, it convinces me to worship something other than it and in so doing increases my quality of life?
However, there’s a reason I put “cognitive bias” as the first item on my list: I believe that it is overwhelmingly more likely than any alternatives. Thus, it would take a significant amount of evidence to convince me that I’m not laboring under such a bias, even if the bias does not yet have a catchy name.
That don’t have any other effect? That remain stable for years? With no other sign of mental illness?
AFAIK some brain cancers can present this way. In any case, if I started experiencing unusual physical symptoms all of a sudden, I’d consult a medical professional. Then I’d write down the results of his tests, and consult a different medical professional, just in case. Better safe than sorry.
And instead of messing with me like a real trickster, it convinces me to worship something other than it and in so doing increases my quality of life?
Trickster spirits (especially Tanuki or Kitsune) rarely demand worship; messing with people is enough for them. Some such spirits are more or less benign; the Tanuki and Raven both would probably be on board with the idea of tricking a human into improving his or her life.
That said, you skipped over human agents and aliens, both of which are IMO overwhelmingly more likely to exist than spirits (though that doesn’t make them likely to exist in absolute terms).
Well, as best I can tell my maintainer didn’t install the religion patch, so all I’m working with is the testaments of others; but I have seen quite a variety of such testaments. Buddhism and Hinduism have a typology of religious experience much more complex than anything I’ve seen systematically laid down in mainline Christianity; it’s usually expressed in terms unique to the Dharmic religions, but vipassanā for example certainly seems to qualify as an experiential pointer to Buddhist ontology.
If you’d prefer Western traditions, a phrase I’ve heard kicked around in the neopagan, reconstructionist, and ceremonial magic communities is “unsubstantiated personal gnosis”. While that’s a rather flippant way of putting it, it also seems to point to something similar to your experiences.
Careful, you may end up like Draco in HPMoR chapter 23, without a way to gom jabbar the guilty parties (sorry about the formatting):
“You should have warned me,” Draco said. His voice rose. “You should have warned me!”
“I… I did… every time I told you about the power, I told you about the price. I said, you have to admit you’re wrong. I said this would be the hardest path for you. That this was the sacrifice anyone had to make to become a scientist. I said, what if the experiment says one thing and your family and friends say another—”
“You call that a warning?” Draco was screaming now. “You call that a warning? When we’re doing a ritual that calls for a permanent sacrifice?”
“I… I...” The boy on the floor swallowed. “I guess maybe it wasn’t clear. I’m sorry. But that which can be destroyed by the truth should be.”
Nah, false beliefs are worthless. That which is true is already so; owning up to it doesn’t make it worse. If I turned out to actually be wrong—well, I have experience being wrong about religion. I’d probably react just like I did before.
It sounded like she was already coming down on the side of the good being good because it is commanded by God when she said, “an innate morality that seems, possibly, somewhat arbitrary.”
So maybe the dilemma is not such a problem for her.
I can understand your hesitation about telling that story. Thanks for sharing it.
Some questions, if you feel like answering them:
Can you give me some examples of things you hadn’t known Christian doctrine considered Bad before you sensed them as A?
If you were advising someone who lacks the ability to sense Good and Bad directly on how to have accurate beliefs about what’s Good and Bad, what advice would you give? (It seems to follow from what you’ve said elsewhere that simply telling them to believe Christianity isn’t sufficient, since lots of people sincerely believe they are following the directive to “believe Christianity” and yet end up believing Bad things. It seems something similar applies to “believe the New Testament”. Or does it?)
If you woke up tomorrow and you experienced sensation A in situations that were consistent with Christianity being true, and experienced sensation B in situations that were consistent with Islam being true, what would you conclude about the world based on those experiences?
** EDIT: My original comment got A and B reversed. Fixed.
I think that should probably be AspiringKnitter’s call. (I don’t think you’re pushing too hard, given the general norms of this community, but I’m not sure of what our norms concerning religious discussions are.)
Let’s try that! I got a Bad signal on the Koran and a website explaining the precepts of Wicca, but I knew what both of those were. I would be up for trying a test where you give me quotes from the Christian Bible (warning: I might recognize them; if so, I’ll tell you; for what it’s worth, I’ve only read part of Ezekiel but might recognize the story anyway… I’ve read a lot of the Bible, actually), other holy books, and neutral sources like novels (though I might have read those, too; I’ll tell you if I recognize them), without telling me where they’re from. If it’s too difficult to find Biblical quotes, other Christian writings might serve, as could similar writings from other religions. I should declare up front that I know next to nothing about Hinduism but once got a weak Good reading from what someone said about it. Also, I would prefer longer quotes; the feelings build up from unnoticeable, rather than hitting full-force instantly. If they could be at least as long as a chapter of the Bible, that would be good.
That is, if you’re actually proposing that we test this. If you didn’t really want to, sorry. It just seems cool.
The preparatory prayer is made according to custom.
The first prelude will be a certain historical consideration of ___ on the one part, and __ on the other, each of whom is calling all men to him, to be gathered together under his standard.
The second is, for the construction of the place, that there be represented to us a most extensive plain around Jerusalem, in which ___ stands as the Chief-General of all good people. Again, another plain in the country of Babylon, where ___ presents himself as the captain of the wicked and [God’s] enemies.
The third, for asking grace, will be this, that we ask to explore and see through the deceits of the evil captain, invoking at the same time the Divine help in order to avoid them; and to know, and by grace be able to imitate, the sincere ways of the true and most excellent General, ___ .
The first point is, to imagine before my eyes, in the Babylonian plain, the captain of the wicked, sitting in a chair of fire and smoke, horrible in figure, and terrible in countenance.
The second, to consider how, having assembled a countless number of demons, he disperses them through the whole world in order to do mischief; no cities or places, no kinds of persons, being left free.
The third, to consider what kind of address he makes to his servants, whom he stirs up to seize, and secure in snares and chains, and so draw men (as commonly happens) to the desire of riches, whence afterwards they may the more easily be forced down into the ambition of worldly honour, and thence into the abyss of pride.
Thus, then, there are three chief degrees of temptation, founded in riches, honours, and pride; from which three to all other kinds of vices the downward course is headlong.
If I had more of the quote, it would be easier. I get a weak Bad feeling, but while the textual cues suggest it probably comes from either the Talmud or the Koran, and while I think it is, I’m not getting a strong feeling on this quote, so this makes me worry that I could be confused by my guess as to where it comes from.
But I’m going to stick my neck out anyway; I feel like it’s Bad.
If I had more of the quote, it would be easier. I get a weak Bad feeling, but while the textual cues suggest it probably comes from either the Talmud or the Koran, and while I think it is, I’m not getting a strong feeling on this quote, so this makes me worry that I could be confused by my guess as to where it comes from. But I’m going to stick my neck out anyway; I feel like it’s Bad.
What do you think of this; it’s a little less obscure:
Your wickedness makes you as it were heavy as lead, and to tend downwards with great weight and pressure towards hell; and if [God] should let you go, you would immediately sink and swiftly descend and plunge into the bottomless gulf, and your healthy constitution, and your own care and prudence, and best contrivance, and all your righteousness, would have no more influence to uphold you and keep you out of hell, than a spider’s web would have to stop a falling rock. Were it not that so is the sovereign pleasure of [God], the earth would not bear you one moment; for you are a burden to it; the creation groans with you; the creature is made subject to the bondage of your corruption, not willingly; the sun don’t willingly shine upon you to give you light to serve sin and [the evil one]; the earth don’t willingly yield her increase to satisfy your lusts; nor is it willingly a stage for your wickedness to be acted upon; the air don’t willingly serve you for breath to maintain the flame of life in your vitals, while you spend your life in the service of [God]‘s enemies. [God]‘s creatures are good, and were made for men to serve [God] with, and don’t willingly subserve to any other purpose, and groan when they are abused to purposes so directly contrary to their nature and end. And the world would spew you out, were it not for the sovereign hand of him who hath subjected it in hope. There are the black clouds of [God]’s wrath now hanging directly over your heads, full of the dreadful storm, and big with thunder; and were it not for the restraining hand of [God] it would immediately burst forth upon you. The sovereign pleasure of [God] for the present stays his rough wind; otherwise it would come with fury, and your destruction would come like a whirlwind, and you would be like the chaff of the summer threshing floor.
I recognized it by the first sentence, but then I have read it several times. (For those of you that haven’t heard of it, it is probably the most famous American sermon, delivered in 1741.)
… the mysterious (tablet) … is surrounded by an innumerable company of angels; these angels are of all kinds, — some brilliant and flashing, down to ___. The light comes and goes on the tablet; and now it is steady...
And now there comes an Angel, to hide the tablet with his mighty wing. This Angel has all the colours mingled in his dress; his head is proud and beautiful; his headdress is of silver and red and blue and gold and black, like cascades of water, and in his left hand he has a pan-pipe of the seven holy metals, upon which he plays. I cannot tell you how wonderful the music is, but it is so wonderful that one only lives in one’s ears; one cannot see anything any more.
Now he stops playing and moves with his finger in the air. His finger leaves a trail of fire of every colour, so that the whole Aire is become like a web of mingled lights. But through it all drops dew.
(I can’t describe these things at all. Dew doesn’t represent what I mean in the least. For instance, these drops of dew are enormous globes, shining like the full moon, only perfectly transparent, as well as perfectly luminous.)
…
All this while the dewdrops have turned into cascades of gold finer than the eyelashes of a little child. And though the extent of the Aethyr is so enormous, one perceives each hair separately, as well as the whole thing at once. And now there is a mighty concourse of angels rushing toward me from every side, and they melt upon the surface of the egg in which I am standing __, so that the surface of the egg is all one dazzling blaze of liquid light.
Now I move up against the tablet, — I cannot tell you with what rapture. And all the names of __, that are not known even to the angels, clothe me about. All the seven senses are transmuted into one sense, and that sense is dissolved in itself …
You had a Bad feeling about two Christian quotes that mentioned Hell or demons/hellfire. You also got a Good feeling about a quote from Nietzsche that didn’t mention Hell. I don’t know the context of your reactions to the Tarot and Wicca, but obviously people have linked those both to Hell. (See also Horned God, “Devil” trump.) So I wanted to get your reaction to a passage with no mention of Hell from an indeterminate religion, in case that sufficed to make it seem Good.
The author designed a famous Tarot deck, and inspired a big chunk (at minimum) of Wicca.
I hadn’t considered that hypothesis. I’d upvote for the novel theory, but now that you’ve told me, you’ll never be able to trust further reactions that could confirm or deny it, which seems like it’s worth a downvote, so I’m not voting your post up or down. That said, I think this fails to explain having a Bad reaction to this page and the entire site it’s on, despite thinking before reading it that Wicca was foofy nonsense and completely not expecting to find evil of that magnitude (a really, really strong feeling—none of the quotes you guys have asked me about have been even a quarter that bad). It wasn’t slow, either; unlike most other things, it was almost immediately obvious. (The fact that this has applied to everything else I’ve ever read about Wicca since—at least, everything written by Wiccans about their own religion—could have to do with expectation, so I can see where you wouldn’t regard subsequent reactions as evidence… but the first one, at least, caught me totally off-guard.)
I know who Crowley is. (It was his tarot deck that someone gave me as a gift—and I was almost happy about it, because I’d actually been intending to research tarot because it seemed cool and I meant to use the information for a story I was writing. But then I felt like, you know, Bad, so I didn’t end up using it.) That’s why I was surprised not to have a bad feeling about his writings.
Man is a rope tied between beast and [superior man] - a rope over an abyss. A dangerous across, a dangerous on-the-way, a dangerous looking-back, a dangerous shuddering and stopping.
What is great in man is that he is a bridge and not a goal: what is lovable in man is that he is an overture and a going under.
I love those that know not how to live except by going under, for they are those who cross over.
I love the great despisers, because they are the great reverers, and arrows of longing for the other shore.
I love those who do not first seek a reason beyond the stars for going under and being sacrifices, but sacrifice themselves to the earth, that the earth may some day become the [superior man’s].
I love him who lives to know, and wants to know so that the [superior man] may live some day. Thus he wants to go under.
I love him who works and invents to build a house for the [superior man] and to prepare earth, animal, and plant for him: for thus he wants to go under.
I love him who loves his virtue: for virtue is the will to go under, and an arrow of longing.
I love him who does not hold back one drop of spirit for himself, but wants to be entirely the spirit of his virtue: thus he strides over the bridge as spirit.
I love him who makes his virtue his addiction and catastrophe: for his virtue’s sake he wants to live on and to live no longer.
I love him who does not want to have too many virtues. One virtue is more virtue than two, because it is more of a noose on which his catastrophe may hang.
I love him whose soul squanders itself, who wants no thanks and returns none: for he always gives away, and does not want to preserve himself.
I love him who is abashed when the dice fall to make his fortune, and who asks: “Am I a crooked gambler?” For he wants to perish.
I love him who casts golden words before his deed, and always does more than he promises: for he wants to go under.
I love him who justifies future and redeems past generations: for he wants to perish of the present.
I love him who chastens his God, because he loves his God: for he must perish of the wrath of his God.
I love him whose soul is deep even in being wounded, and who can perish of a small experience: thus he gladly goes over the bridge.
I love him whose soul is so overfull that he forgets himself, and all things are in him: thus all things spell his going under.
I love him who has a free spirit and a free heart: thus his head is only the entrails of his heart, but his heart causes him to go under.
I love all who are as heavy drops, falling one by one out of the dark cloud that hangs over men: they herald the advent of lightning, and, as heralds, they perish.
Behold, I am a herald of the lightning, and a heavy drop from the cloud: but this lightning is called [superior man].
I get a moderate Good reading (?!) and I’m confused to get it because the morality the person is espousing seems wrong. I’m guessing this comes from someone’s writings about their religion, possibly an Eastern religion?
I get a moderate Good reading (?!) and I’m confused to get it because the morality the person is espousing seems wrong. I’m guessing this comes from someone’s writings about their religion, possibly an Eastern religion?
Walter Kaufmann (Nietzsche’s translator here) prefers “overman” as the best translation of Übermensch.
ETA: This is some interesting commentary on the work
I’m surprised. I’d heard Nietzsche was not a nice person, but had also heard good things about him… huh. I’ll have to read his work, now. I wonder if the library has some.
Nietzsche’s sister was an anti-Semite and a German nationalist. After Nietzsche’s death, she edited his works into something that became an intellectual foundation for Nazism. Thus, he got a terrible reputation in the English-speaking world.
It’s tolerably clear from a reading of his unabridged works that Nietzsche would have hated Nazism. But he would not have identified himself as Christian (at least as measured by a typical American today). He went mad before he died, and the apocryphal tale is that the last thing he did before being institutionalized was to see a horse being beaten on the street and move to protect it.
To see his moral thought, you could read Thus Spake Zarathustra. To see why he isn’t exactly Christian, you can look at The Genealogy of Morals. Actually, you might also like Kierkegaard, because he expresses somewhat similar thoughts, but within a Christian framework.
To really see why he isn’t Christian, read The Antichrist.
The Christian conception of God—God as god of the sick, God as a spider, God as spirit—is one of the most corrupt conceptions of the divine ever attained on earth… God as the declaration of war against life, against nature, against the will to live! God—the formula for every slander against “this world,” for every lie about the “beyond”! God—the deification of nothingness, the will to nothingness pronounced holy!
As with what he wrote in Genealogy of Morals, it is unclear how tongue-in-cheek or intentionally provocative Nietzsche is being. I’m honestly not sure whether Nietzsche thought the “master morality” was better or worse than the “slave morality.”
The sense I get—but note that it’s been a couple of years since I’ve read any substantial amount of Nietzsche—is that he treats master morality as more honest, and perhaps what we could call psychologically healthier, than slave morality, but does not advocate that the former be adopted over the latter by people living now; the transition between the two is usually explained in terms of historical changes. The morality embodied by his superior man is neither, or a synthesis of the two, and while he says a good deal about what it’s not I don’t have a clear picture of many positive traits attached to it.
The morality embodied by his superior man is neither, or a synthesis of the two, and while he says a good deal about what it’s not I don’t have a clear picture of many positive traits attached to it.
That’s because the superman, by definition, invents his own morality. If you read a book telling you the positive content of morality and implement it because the eminent philosopher says so, you ain’t superman.
I wouldn’t call him a fully sane person, especially in his later work (he suffered in later life from mental problems most often attributed to neurosyphilis, and it shows), but he has a much worse reputation than I think he really deserves. I’d recommend Genealogy of Morals and The Gay Science; they’re both laid out a bit more clearly than the works he’s most famous for, which tend to be heavily aphoristic and a little scattershot.
It’s easy to find an equally forceful bit by Nietzsche that’s not been quoted to death, really. Had AK recognized it, you would’ve botched a perfectly good test.
Fairly read as a whole and in the context of the trial, the instructions required the jury to find that Chiarella obtained his trading advantage by misappropriating the property of his employer’s customers. The jury was charged that,
“[i]n simple terms, the charge is that Chiarella wrongfully took advantage of information he acquired in the course of his confidential position at Pandick Press and secretly used that information when he knew other people trading in the securities market did not have access to the same information that he had at a time when he knew that that information was material to the value of the stock.”
Record 677 (emphasis added). The language parallels that in the indictment, and the jury had that indictment during its deliberations; it charged that Chiarella had traded “without disclosing the material non-public information he had obtained in connection with his employment.” It is underscored by the clarity which the prosecutor exhibited in his opening statement to the jury. No juror could possibly have failed to understand what the case was about after the prosecutor said:
“In sum, what the indictment charges is that Chiarella misused material nonpublic information for personal gain and that he took unfair advantage of his position of trust with the full knowledge that it was wrong to do so. That is what the case is about. It is that simple.”
Id. at 46. Moreover, experienced defense counsel took no exception and uttered no complaint that the instructions were inadequate in this regard. [Therefore, the conviction is due to be affirmed].
I get no reading here. My guess is that this is some sort of legal document, in which case I’m not really surprised to get no reading. Is that correct?
Yes, it is a legal document. Specifically, a dissent from the reversal of a criminal conviction. In particular, I think the quoted text is an incredibly immoral and wrong-headed understanding of American criminal law. Which makes it particularly depressing that the writer was Chief Justice when he wrote it.
Yes, where names need to be changed. [God] will be sufficient to confuse me as to whether it’s “the LORD” or “Allah” in the original source material. There might be a problem with substance in very different holy books where I might be able to guess the religion just by what they’re saying (like if they talk about reincarnation or castes, I’ll know they’re Hindu or Buddhist). I hope anyone finding quotes will avoid those, of course.
This is a bit off-topic, but, out of curiosity, is there anything in particular that you find objectionable about Wicca on a purely analytical level? I’m not saying that you must have such a reason, I’m just curious.
Just in the interests of pure disclosure, the reason I ask is because I found Wicca to be the least harmful religion among all the religions I’d personally encountered. I realize that, coming from an atheist, this doesn’t mean much, of course...
I’m actually not entirely sure what you mean by “incorrect”, and how it differs from “sinful”. As an atheist, I would say that Wicca is “incorrect” in the same way that every other religion is incorrect, but presumably you’d disagree, since you’re religious.
Some Christians would say that Wicca is both “incorrect” and “sinful” because its followers pray to the wrong gods, since a). YHVH/Jesus is the only God who exists, thus worshiping other (nonexistent) gods is incorrect, and b). he had expressly commanded his followers to worship him alone, and disobeying God is sinful. In this case, though, the “sinful” part seems a bit redundant (since Wiccans would presumably worship Jesus if they were convinced that he existed and their own gods did not). But perhaps you meant something else ?
I mean incorrect in that they believe things that are wrong, yes; they believe in, for instance, a goddess who doesn’t really exist. And sinful because witchcraft is forbidden.
Wouldn’t this imply that witchcraft is effective, though? Otherwise it wouldn’t be forbidden; after all, God never said (AFAIK), “you shouldn’t pretend to cast spells even though they don’t really work”, nor did he forbid a bunch of other stuff that is merely silly and a waste of time. But if witchcraft is effective, it would imply that it’s more or less “correct”, which is why I was originally confused about what you meant.
FWIW, I feel compelled to point out that some Wiccans believe in multiple gods or none at all, even though this is off-topic—since I can practically hear my Wiccan acquaintances yelling at me in the back of my head… metaphorically speaking, that is.
Wouldn’t this imply that witchcraft is effective, though?
Yes.
Ok, but in that case, isn’t witchcraft at least partially “correct”? Otherwise, how can they cast all those spells and make them actually work (assuming, that is, that their spells actually do work)?
Ah, right, so you believe that the entities that Wiccans worship do in some way exist, but that they are actually demons, not benign gods.
I should probably point out at this point that Wiccans (well, at least those whom I’d met) consider this point of view utterly misguided and incredibly offensive. No one likes to be called a “demon-worshiper”, especially when one is generally a nice person whose main tenet in life is a version of “do no harm”. You probably meant no disrespect, but flat-out calling a whole group of people “demon-worshipers” tends to inflame passions rather quickly, and not in a good way.
I should probably point out at this point that Wiccans (well, at least those whom I’d met) consider this point of view utterly misguided and incredibly offensive.
That’s a bizarre thing to say. Is their offense evidence that I’m wrong? I don’t think so; I’d expect it whether or not they worship demons. Or should I believe something falsely because the truth is offensive? That would go against my values—and, dare I say it, the suggestion is offensive. ;) Or do you want me to lie so I’ll sound less offensive? That risks harm to me (it’s forbidden by the New Testament) and to them (if no one ever tells them the truth, they can’t learn), as well as not being any fun.
No one likes to be called a “demon-worshiper”,
What is true is already so,
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
especially when one is generally a nice person whose main tenet in life is a version of “do no harm”.
Nice people like that deserve truth, not lies, especially when eternity is at stake.
flat-out calling a whole group of people “demon-worshipers” tends to inflame passions rather quickly,
So does calling people Cthulhu-worshipers. But when you read that article, you agreed that it was apt, right? Because you think it’s true. You guys sure seem quick to tell me that my beliefs are offensive, but if I said the same to you, you’d understand why that’s beside the point. If Wiccans worship demons, I desire to believe that Wiccans worship demons; if Wiccans don’t worship demons, I desire to believe that Wiccans don’t worship demons. Sure, it’s offensive and un-PC. If you want me to stop believing it, tell me why you think it’s wrong.
I like your post (and totally agree with the first paragraph), but have some concerns that are a little different from Bugmaster’s.
What’s the exact difference between a god and a demon? Suppose Wicca is run by a supernatural being (let’s call her Astarte) who asks her followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of spells, and insists she will reward the righteous and punish the wicked. You worship a different supernatural being who also asks His followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of prayer, and insists He will reward the righteous and punish the wicked. If both Jehovah and Astarte exist and act similarly, why name one “a god” and the other “a demon”? Really, the only asymmetry seems to be that Jehovah tries to inflict eternal torture on people who prefer Astarte, whereas Astarte has made no such threats against people who prefer Jehovah, which is honestly advantage Astarte. So why not just say “Of all the supernatural beings out there, some people prefer this one and other people prefer that one”?
I mean, one obvious answer is certainly to list the ways Jehovah is superior to Astarte—the one created the Universe, the other merely lives in it; the one is all-powerful, the other merely has some magic; the one is wise and compassionate, the other evil and twisted. But all of these are Jehovah’s assertions. One imagines Astarte makes different assertions to her followers. The question is whose claims to believe.
Jehovah has a record of making claims which seem to contradict the evidence from other sources—the seven-day creation story, for example. And He has a history of doing things which, when assessed independently of their divine origin, we would consider immoral—the Massacre of the Firstborn in Exodus, or sanctioning the rape, enslavement, infanticide, and genocide of the Canaanites. So it doesn’t seem obvious at all that we should trust His word over Astarte’s, especially since you seem to think that Astarte’s main testable claim—that she does magic for her followers—is true.
Now, you’ve already said that you believe in Christianity because of direct personal revelation—a sense of serenity and rightness when you hear its doctrines, and a sense of repulsion from competing religions, and that this worked even when you didn’t know what religion you were encountering and so could not bias the result. I upvoted you when you first posted this because I agree that such feelings could provide some support for religious belief. But that was before you said you believed in competing supernatural beings. Surely you realize how difficult a situation that puts you in?
Giving someone a weak feeling of serenity or repulsion is, as miracles go, not a very flashy one. One imagines it would take only simple magic, and should be well within the repertoire of even a minor demon or spirit. And you agree that Astarte performs minor miracles of the same caliber all the time to try to convince her own worshippers. So all that your feelings indicate is that some supernatural being is trying to push you toward Christianity. If you already believe that there are multiple factions of supernatural beings, some of whom push true religions and others of whom push false ones, then noticing that some supernatural being is trying to push you toward Christianity provides zero extra evidence that Christianity is true.
Why should you trust the supernatural beings who have taken an interest in your case, as opposed to the supernatural beings apparently from a different faction who caused the seemingly miraculous revelations in this person and this person’s lives?
Since you use the names Jehovah and Astarte, I’ll follow suit, though they’re not the names I prefer.
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte. Also, if Astarte knows this, but pretends otherwise, then Astarte’s a liar.
If you already believe that there are multiple factions of supernatural beings, some of whom push true religions and others of whom push false ones, then noticing that some supernatural being is trying to push you toward Christianity provides zero extra evidence that Christianity is true.
Not quite. I only believe in “multiple factions of supernatural beings” (actually only two) because it’s implied by Christianity being true. It’s not a prior belief. If Christianity is false, one or two or fifteen or zero omnipotent or slightly-powerful or once-human or monstrous gods could exist, but if Christianity is false I’d default to atheism, since if my evidence for Christianity proved false (say, I hallucinated it all because of some undiagnosed mental illness that doesn’t resemble any currently-known mental illness and only causes that one symptom) without my gaining additional evidence for some other religion or non-atheist cosmology, I’d have no evidence for anything spiritual. Or do I misunderstand? I’m confused.
Why should you trust the supernatural beings who have taken an interest in your case, as opposed to the supernatural beings apparently from a different faction who caused the seemingly miraculous revelations in this person and this person’s lives?
Being, singular, first of all.
1. I already know myself, what kind of a person I am. I know how rational I am. I know how non-crazy I am. I know exactly the extent to which I’ve considered illness affecting my thoughts as a possible explanation.
2. I know I’m not lying.
3. The first person became an apostate, something I’ve never done, and is still confused years later. The second person records only the initial conversion, while I know how it’s played out in my own life for several years.
4. The second person is irrationally turned off by even the mere appearance of Catholicism and Christianity in general because of terrible experiences with Catholics.
5. I discount all miracle stories from people I don’t know, including Christian and Jewish miracle stories, which could at least plausibly be true. I discount them ALL when I don’t know the person. In fact, that means MOST of the stories I hear and consider unlikely (without passing judgment when I have so little info) are stories that, if true, essentially imply Christianity, while others would provide evidence for it.
6. And knowing how my life has gone, I know how I’ve changed as a person since accepting Jesus, or Jehovah if that’s the word you prefer. They don’t mention drastic changes to their whole personalities to the point of near-unrecognizability even to themselves. In brief: I was unbelievably awful. I was cruel, hateful, spiteful, vengeful and not a nice person. I was actively hurtful toward everyone, including immediate family. After finding Jesus, I slowly became a less horrible person, until I got to where I am now. Self-evaluation may be somewhat unreliable, but I think the lack of any physical violence recently is a good sign. Also, rather than escalating arguments as far as possible, when I realize I’ve lashed out, I deliberately make an effort not to fall prey to consistency bias and defend my actions, but to stop and apologize and calm down. That’s something I would not have done—would not have WANTED to do, would not have thought was a good idea, before.
And you agree that Astarte performs minor miracles of the same caliber all the time to try to convince her own worshippers.
I don’t know (I only guess) what Astarte does to xyr worshipers. I’m conjecturing; I’ve never prayed to xem, nor have I ever been a Wiccan or a follower of any other non-Christian religion. But I think I ADBOC this statement; if said by me, it would have sounded more like “Satan makes xyrself look very appealing”.
(I’m used to a masculine form for this being. You’re using a feminine form. Rather than argue, I’ve simply shifted my pronoun usage to an accurate—possibly more accurate—and less loaded set of pronouns.)
Also, my experience suggests that if something is good or evil, and you’re open to the knowledge, you’ll see through any lies or illusions with time. It might be a lot of time—I’ll confess I recently got suckered into something for, I think, a couple of years, when I really ought to have known better much sooner, and no, I don’t want to talk about it—but to miss it forever requires deluding yourself.
(Not, as we all know, that self-delusion is particularly rare...)
So all that your feelings indicate is that some supernatural being is trying to push you toward Christianity.
That someone is trying to convince me to be a Christian or that I perceive the nature of things using an extra sense.
Giving someone a weak feeling of serenity or repulsion is, as miracles go, not a very flashy one.
Strength varies. Around the time I got to the fourth Surah of the Koran, it was much flashier than anything I’ve seen since, including everything previously described (on the negative side) at incredible strength plus an olfactory hallucination. And the result of, I think, two days straight of Bible study and prayer at all times constantly… well, that was more than a weak feeling of serenity. But on its own it’d be pretty weak evidence, because I was only devoting so much time to prayer because my state of mind was so volatile and my thoughts and feelings were unreliable. It’s only repetitions of that effect that let me conclude that it means what I’ve already listed, after controlling for other possibilities that are personal so I don’t want to talk about it. Those are rare extremes, though; normally it’s not as flashy as those.
you seem to think that Astarte’s main testable claim—that she does magic for her followers—is true.
I consider it way likelier than you do, anyway. I’m only around fiftyish percent confidence here. But that’s only one aspect of it. Their religion also claims to cause changes in its followers along the lines of “more in tune with the Divine” or something, right? So if there are any overlapping claims about morality, that would also be testable—NOT absolute morality of the followers, but change in morality on mutually-believed-in traits, measuring before and after conversion, then a year on, then a few years on, then several years on. Of course, I’m not sure how you’ll ever get the truth about how moral people are when they think no one’s watching...
Sorry—I used “Astarte” and the female pronoun because the Wiccans claim to worship a Goddess, and Astarte was the first female demon I could think of. If we’re going to go gender-neutral, I recommend “eir”, just because I think it’s the most common gender neutral pronoun on this site and there are advantages to standardizing this sort of thing.
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte.
Well, okay, but this seems to be an argument from force, sort of “Jehovah is a god and Astarte a demon because if I say anything else, Jehovah will torture me”. It seems to have the same form as “Stalin is not a tyrant, because if I call Stalin a tyrant, he will kill me, and I don’t want that!”
Not quite. I only believe in “multiple factions of supernatural beings” (actually only two) because it’s implied by Christianity being true.
It sounds like you’re saying the causal history of your belief should affect the probability of it being true.
Suppose before you had any mystical experience, you had non-zero probabilities X of atheism, Y of Christianity (in which God promotes Christianity and demons promote non-Christian religions like Wicca), and Z of any non-Christian religion (in which God promotes that religion and demons promote Christianity).
Then you experience an event which you interpret as evidence for a supernatural being promoting Christianity. This should raise the probability of Y and Z by the same factor, since both theories seem to predict this equally well.
You could still end up a Christian if you started off with a higher probability Y than Z, but it sounds like you weren’t especially interested in Christianity before your mystical experience, and the prior for Z is higher than Y since there are so many more non-Christian than Christian religions.
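To spell out that update in odds form (my own sketch of the argument above, with E standing for “a supernatural being seems to be pushing me toward Christianity”):

\frac{P(Y \mid E)}{P(Z \mid E)} = \frac{P(E \mid Y)}{P(E \mid Z)} \cdot \frac{P(Y)}{P(Z)} \approx 1 \cdot \frac{P(Y)}{P(Z)},

so if both supernatural hypotheses predict E about equally well, the posterior odds between them sit wherever the prior odds were; the experience only moves probability away from atheism (X), not from Z toward Y.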
Being, singular, first of all...
I understand you as having two categories of objections: first, objections that the specific people in the Islamic conversion stories are untrustworthy or their stories uninteresting (3,4,6). Second, that you find mystical experiences by other people inherently hard to believe but you believe your own because you are a normal sane person (1,2,5).
The first category of objections applies only to those specific people’s stories. That’s fair enough since those were the ones I presented, but they were the ones I presented because they were the first few good ones I found in the vast vast vast vast VAST Islamic conversion story literature. I assume that if you were to list your criteria for believability, we could eventually find some Muslim who experienced a seemingly miraculous conversion who fit all of those criteria (including changing as a person) - if it’s important to you to test this, we can try.
The second category of objections is more interesting. Different studies show somewhere from a third to half of Americans having mystical experiences, including about a third of non-religious people who have less incentive to lie. Five percent of people experience them “regularly”. Even granted that some of these people are lying and other people categorize “I felt really good” as a mystical experience, I don’t think denying that these occur is really an option.
The typical view that people need to be crazy, or on the brink of death, or uneducated, or something other than a normal middle class college-educated WASP adult in order to have mystical experiences also breaks down before the evidence. According to Greeley 1975 and Hay and Morisy 1976, well-educated upper class people are more likely to have mystical experiences, and Hay and Morisy 1978 found that people with mystical experiences are more likely to be mentally well-balanced.
Since these experiences occur with equal frequency among people of all religions and even atheists, I continue to think this supports either the “natural mental process” idea or the “different factions of demons” idea—you can probably guess which one I prefer :)
Also, my experience suggests that if something is good or evil, and you’re open to the knowledge, you’ll see through any lies or illusions with time.
There are 1.57 billion Muslims and 2.2 billion Christians in the world. Barring something very New-Agey going on, at least one of those groups believes an evil lie. The number of Muslims who convert to Christianity at some point in their lives, or vice versa, is only a tiny fraction of a percent. So either only a tiny fraction of a percent of people are open to the knowledge—so tiny that you could not reasonably expect yourself to be among them—or your experience has just been empirically disproven.
(PS: You’re in a lot of conversations at once—let me know if you want me to drop this discussion, or postpone it for later)
Speaking of mystical experiences, my religion tutor at the university (an amazing woman, Christian but pretty rational and liberal) had one, as she told us, on public transport one day, and that’s when she converted, despite growing up in an atheistic middle-class Soviet family.
Oh, and the closest thing I ever had to one was when I tried sensory deprivation + dissociatives (getting high on cough syrup, then submerging myself in a warm bath with the lights out and ears plugged; I had a timer set to 40 minutes and a thin ray of light falling where I could see it by turning my head, as a precaution against, y’know, losing myself). That experiment was both euphoric and interesting, but I wouldn’t really want to repeat it. I experienced blissful ego death and a feeling of the universe spinning round and round in cycles, around where I would be, but where now was nothing. It’s hard to describe.
And then, well, I saw the tiny, shining shape of Rei Ayanami. She was standing in her white plugsuit amidst the blasted ruins on a dead alien world, and I got the feeling that she was there to restore it to life. She didn’t look at me, but I knew she knew I saw her. Then it was over.
Fret not, I didn’t really make any more bullshit out of that, but it’s certainly an awesome moment to remember.
Second, that you find mystical experiences by other people inherently hard to believe but you believe your own because you are a normal sane person (1,2,5).
Unless I know them already. Once I already know people to be honest, normal, sane people (“normal” isn’t actually required and I object to the typicalist language), their miracle stories have the same weight as my own. Also, miracles of more empirically-verifiable sorts are believable when vetted by snopes.com.
If we’re going to go gender-neutral, I recommend “eir”, just because I think it’s the most common gender neutral pronoun on this site and there are advantages to standardizing this sort of thing.
Xe is poetic and awesome. I’m hoping it’ll become standard English. To that end, I use it often.
(including changing as a person)
I read your first link and I’m very surprised because I didn’t expect something like that. It would be interesting to talk to that person about this.
So either only a tiny fraction of a percent of people are open to the knowledge—so tiny that you could not reasonably expect yourself to be among them -
Is that surprising? First of all, I know that I already converted to Christianity, rather than just having assumed it always, so I’m already more likely to be open to new facts. And second, I thought it was common knowledge around these parts that most people are really, really bad at finding the truth. How many people know Bayes? How many know what confirmation bias is? Anchoring? The Litany of Tarski? Don’t people on this site rail against how low the sanity waterline is? I mean, you don’t disagree that I’m more rational than most Christians and Muslims, right?
Different studies show somewhere from a third to half of Americans having mystical experiences, including about a third of non-religious people who have less incentive to lie. Five percent of people experience them “regularly”.
Do they do this by using tricks like Multiheaded described? Or by using mystical plants or meditation? (I know there are Christians who think repeating a certain prayer as a mantra and meditating on it for a long time is supposed to work… and isn’t there, or wasn’t there, some Islamic sect where people try to find God by spinning around?) If so, that really doesn’t count. Is there another study where that question was asked? Because if you’re asserting that mystical experiences can be artificially induced by such means in most if not all people, then we’re in agreement.
Well, okay, but this seems to be an argument from force, sort of “Jehovah is a god and Astarte a demon because if I say anything else, Jehovah will torture me”. It seems to have the same form as “Stalin is not a tyrant, because if I call Stalin a tyrant, he will kill me, and I don’t want that!”
I was thinking more along the lines of “going to hell is a natural consequence of worshiping Astarte”, analogous to “if I listen to my peers and smoke pot, I won’t be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad”. I hadn’t even considered it from that point of view before.
Is that surprising? … Don’t people on this site rail against how low the sanity waterline is? I mean, you don’t disagree that I’m more rational than most Christians and Muslims, right?
No, I suppose it’s not surprising. I guess I misread the connotations of your claim. Although I am still not certain I agree: I know some very rational and intelligent Christians, and some very rational and intelligent atheists (I don’t really know many Muslims, so I can’t say anything about them). At some point I guess this statement is true by definition, since we can define open-minded as “open-minded enough to change religions if you have good enough evidence to do so.” But I can’t remember where we were going with this one so I’ll shut up about it.
Do they do this by using tricks like Multiheaded described? Or by using mystical plants or meditation? (I know there are Christians who think repeating a certain prayer as a mantra and meditating on it for a long time is supposed to work… and isn’t there, or wasn’t there, some Islamic sect where people try to find God by spinning around?) If so, that really doesn’t count. Is there another study where that question was asked? Because if you’re asserting that mystical experiences can be artificially induced by such means in most if not all people, then we’re in agreement.
I was unable to find numerical data on this. I did find some assertions in the surveys that some of the mystical experiences were untriggered, I found one study comparing 31 people with triggered mystical experiences to 31 people with untriggered mystical experiences (suggesting it’s not too hard to get a sample of the latter), and I have heard anecdotes from people I know about having untriggered mystical experiences.
Honestly I had never really thought of that as an important difference. Keep in mind that it’s really weird that the brain responds to relatively normal stressors, like fasting or twirling or staying still for too long, by producing this incredible feeling of union with God. Think of how surprising this would be if you weren’t previously aware of it, how complex a behavior this is, as opposed to something simpler like falling unconscious. The brain seems to have this built-in, surprising tendency to have mystical experiences, which can be triggered by a lot of different things.
As someone in the field of medicine, I’m reminded of seizures, another unusual mental event which can be triggered in similar conditions. Doctors have this concept called the “seizure threshold”. Some people have low seizure thresholds, other people high seizure thresholds. Various events—taking certain drugs, getting certain diseases, being very stressed, even seeing flashing lights in certain patterns—increase your chance of having a seizure, until it passes your personal seizure threshold and you have one. And then there are some people—your epileptics—who can just have seizures seemingly out of nowhere in the course of everyday life (another example is that some lucky people can induce orgasm at will, whereas most of us only achieve orgasm after certain triggers).
I see mystical experiences as working a lot like seizures—anyone can have one if they experience enough triggers, and some people experience them without any triggers at all. It wouldn’t be at all parsimonious to say that some people have this reaction when they skip a few meals, or stay in the dark, or sit very still, and other people have this reaction when they haven’t done any of these things, but that these are caused by two completely different processes.
I mean, if we already know that dreaming up mystical experiences is the sort of thing the brain does in some conditions, it’s a lot easier to expand that to “and it also does that in other conditions” than to say “but if it happens in other conditions, it is proof of God and angels and demons and an entire structure of supernatural entities.”
I was thinking more along the lines of “going to hell is a natural consequence of worshiping Astarte”, analogous to “if I listen to my peers and smoke pot, I won’t be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad”. I hadn’t even considered it from that point of view before.
The (relatively sparse) Biblical evidence suggests an active role of God in creating Hell and damning people to it. For example:
“This is how it will be at the end of the age. The angels will come and separate the wicked from the righteous and throw them into the blazing furnace, where there will be weeping and gnashing of teeth.” (Matthew 13:49)
“Depart from me, you accursed, into the eternal fire that has been prepared for the devil and his angels!” (Matthew 25:41)
“If anyone’s name was not found written in the book of life, that person was thrown into the lake of fire.” (Revelation 20:15)
“God did not spare angels when they sinned, but sent them to hell, putting them into gloomy dungeons to be held for judgment” (2 Peter 2:4)
“Fear him who, after the killing of the body, has power to throw you into hell. Yes, I tell you, fear him.” (Luke 12:5)
That last one is particularly, um, pleasant. And it’s part of why it is difficult for me to see a moral superiority of Jehovah over Astarte: of the one who’s torturing people eternally, over the one who fails to inform you that her rival is torturing people eternally.
I was thinking more along the lines of “going to hell is a natural consequence of worshiping Astarte”, analogous to “if I listen to my peers and smoke pot, I won’t be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad”. I hadn’t even considered it from that point of view before.
To return to something I pointed out far, far back in this thread, this is not analogous. Your mother does not cause you to lose your voice for doing the things she advises you not to do. On the other hand, you presumably believe that God created hell, or at a minimum, that he tolerates its existence (unless you don’t think God is omnipotent).
(As an aside, another point against the homogeneity you mistakenly assumed you would find on Less Wrong when you first showed up is that not everyone here is a complete moral anti-realist. For me, the fact that one cannot hold the following three premises without contradiction is sufficient to discount any deeper argument for Christianity:
Inflicting suffering is immoral, and inflicting it on an infinite number of people or for an infinite duration is infinitely immoral.
The Christian God is benevolent.
The Christian God allows the existence of Hell.
Resorting to, “Well, I don’t actually know what hell is” is blatant rationalization.)
You don’t actually need to be a moral realist to make that argument; you just need to notice the tension between the set of behavior implied by the Christian God’s traditional attributes and the set of behavior Christian tradition claims for him directly. That in itself implies either a contradiction or some very sketchy use of language (i.e. saying that divine justice allows for infinitely disproportionate retribution).
I think it’s a weakish argument against anything less than a strictly literalist interpretation of the traditions concerning Hell, though. There are versions of the redemption narrative central to Christianity that don’t necessarily involve torturing people for eternity: the simplest one that I know of says that those who die absent a state of grace simply cease to exist (“everlasting life” is used interchangeably with “heaven” in the Bible), although there are interpretations less problematic than that as well.
The (modern) Orthodox opinion that my tutor relayed to us is that Hell isn’t a place at all, but a condition of the soul where it refuses to perceive/accept God’s grace at all and therefore shuts itself out from everything true and meaningful that can be, just wallowing in despair; it exists in literally no-where, as all creation is God’s, and the refusal of God is the very essence of this state. She dismissed all suggestions of sinners’ “torture” in hell—especially by demonic entities—as folk religion.
(Wait, what’s that, looks like either I misquoted her a little or she didn’t quite give the official opinion...)
One expression of the Eastern teaching is that hell and heaven are being in God’s presence, as this presence is punishment and paradise depending on the person’s spiritual state in that presence.[29][32] For one who hates God, to be in the presence of God eternally would be the gravest suffering…
…Some Eastern Orthodox express personal opinions that appear to run counter to official church statements, in teaching hell is separation from God.
I’ve heard that one too, but I’m not sure how functionally different from pitchforks and brimstone I’d consider it to be, especially in light of the idea of a Last Judgment common to Christianity and Islam.
Oh, there’s a difference alright, one that could be cynically interpreted as an attempt to dodge the issue of cruel and disproportionate punishment by theologians. The version above suggests that God doesn’t ever actively punish anyone at all, He simply refuses to force His way to someone who rejects him, even if they suffer as a result. That’s sometimes assumed to be due to God’s respect for free will.
Yeah. Thing is, we’re dealing with an entity who created the system and has unbounded power within it. Respect for free will is a pretty good excuse, but given that it’s conceivable for a soul to be created that wouldn’t respond with permanent and unspeakable despair to separation from the Christian God (or to the presence of a God whom the soul has rejected, in the other scenario), making souls that way looks, at best, rather irresponsible.
If I remember right the standard response to that is to say that human souls were created to be part of a system with God at its center, but that just raises further questions.
What, so god judges that eternal torture is somehow preferable to violating someone’s free will by inviting them to eutopia?
I am so tired of theists making their god so unable to be falsified that he becomes useless. Let’s assume for a moment that some form of god actually exists. I don’t care how much he loves us in his own twisted little way, I can think of 100 ways to improve the world and he isn’t doing any of them. It seems to me that we ought to be able to do better than what god has done, and in fact we have.
The standard response to theists postulating a god should be “so what?”.
I mean, you don’t disagree that I’m more rational than most Christians and Muslims, right?
Actually, I do. You use the language that rationalists use. However, you don’t seem to have considered very many alternate hypotheses. And you don’t seem to have performed any of the obvious tests to make sure you’re actually getting information out of your evidence.
For instance, you could have just cut up a bunch of similarly formatted stories from different sources, (or even better, have had a third party do it for you, so you don’t see it,) stuck them in a box and pulled them out at random—sorting them into Bible and non-Bible piles according to your feelings. If you were getting the sort of information out that would go some way towards justifying your beliefs, you should easily beat random people of equal familiarity with the Bible.
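(If anyone ever does run a sorting test like that, scoring it is straightforward. Here is a minimal sketch in Python of how it could be tallied; the function names and the 18-of-20 example run are made up for illustration, and it only checks whether the sorter beats blind 50/50 guessing, which is a weaker criterion than beating equally Bible-familiar people.)

```python
import math
from typing import Sequence

def binomial_p_value(correct: int, total: int, chance: float = 0.5) -> float:
    """Exact probability of getting at least `correct` right out of `total`
    sorts if every guess were made at the pure-chance rate."""
    return sum(
        math.comb(total, k) * chance**k * (1 - chance) ** (total - k)
        for k in range(correct, total + 1)
    )

def score_sorting(guesses: Sequence[str], truths: Sequence[str]) -> None:
    """Compare a sorter's Bible/non-Bible guesses against the true labels.
    `guesses` and `truths` are parallel lists of "bible"/"other" labels."""
    correct = sum(g == t for g, t in zip(guesses, truths))
    p = binomial_p_value(correct, len(truths))
    print(f"{correct}/{len(truths)} correct; "
          f"chance of doing at least this well by guessing: {p:.4f}")

# Hypothetical run: 20 passages, all actually Biblical, 18 sorted correctly.
score_sorting(["bible"] * 18 + ["other"] * 2, ["bible"] * 20)
```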
Rationality is a tool, and if someone doesn’t use it, then it doesn’t matter how good a tool they have; they’re not a rationalist any more than someone who owns a gun is a soldier. Rationalists have to actually go out and gather/analyse the data.
(Edit to change you to someone for clarity’s sake.)
For instance, you could have just cut up a bunch of similarly formatted stories from different sources, (or even better, have had a third party do it for you, so you don’t see it,) stuck them in a box and pulled them out at random—sorting them into Bible and non-Bible piles according to your feelings. If you were getting the sort of information out that would go some way towards justifying your beliefs, you should easily beat random people of equal familiarity with the Bible.
No, I couldn’t have, for two reasons. By the time I could have thought of it, I would have recognized nearly all the Bible passages as Biblical, and obscuring their meaning would require quotes so short I’d never be able to tell. Those are things I already explained—you know, in the post where I said we should totally test this, using a similar experiment.
No, I couldn’t have, for two reasons. By the time I could have thought of it, I would have recognized nearly all the Bible passages as Biblical, and obscuring their meaning would require quotes so short I’d never be able to tell. Those are things I already explained—you know, in the post where I said we should totally test this, using a similar experiment.
If that’s the stance you’re going to take, it seems destructive to the idea that I should consider you rational. You proposed a test to verify your belief that could not be performed; in the knowledge that, if it was, it would give misleading results.
Minor points:
There’s more than just one bible out there. Unless you’re a biblical scholar, the odds that you’ve read everything in every bible are fairly slim.
‘nearly all’ does leave you with some testable evidence. The odds that it just happens to be too short a test for your truth-sensing faculty to work are, I think, fairly slim.
People tend not to have perfect memories. Even if you are a biblical scholar the odds are that you will make mistakes in this, as you would in anything else, and information gained from the intuitive faculty would be expressed as a lower error rate than like-qualified people.
If that’s the stance you’re going to take, it seems destructive to the idea that I should consider you rational. You proposed a test to verify your belief that could not be performed; in the knowledge that, if it was, it would give misleading results.
Similar test. Not the same test. It was a test that, though still flawed, fixed those two things I could see immediately (and in doing so created other problems).
People tend not to have perfect memories. Even if you are a biblical scholar the odds are that you will make mistakes in this, as you would in anything else, and information gained from the intuitive faculty would be expressed as a lower error rate than like-qualified people.
Similar test. Not the same test. It was a test that, though still flawed, fixed those two things I could see immediately (and in doing so created other problems).
I don’t see that it would have fixed those things. We could, perhaps, come up with a more useful test if we discussed it on a less hostile footing. But, at the moment, I’m not getting a whole lot of info out of the exchange and don’t think it worth arguing with you over quite why your test wouldn’t work, since we both agree that it wouldn’t.
Want to test this?
Not really. It’s not that sort of thing where the outputs of the test would have much value for me. I could easily get 100% of the quotes correct by sticking them into google, as could you. The only answers we could accept with any significant confidence would be the ones we didn’t think the other person was likely to lie about.
My beliefs in respect to claims about the supernatural are held with a high degree of confidence, and pushing them some tiny distance towards the false end of the spectrum is not worth the hours I would have to invest.
For the same reason that if I had a see-an-image-of-Grandpa button, and pushed it, I wouldn’t count the fact that I saw him as evidence that he’s somehow still alive, but if I saw him right now spontaneously, I would.
For the same reason that if I had a see-an-image-of-Grandpa button, and pushed it, I wouldn’t count the fact that I saw him as evidence that he’s somehow still alive, but if I saw him right now spontaneously, I would.
Imagine that you have a switch in your home which responds to your touch by turning on a lamp (this probably won’t take much imagination). One day this lamp, which was off, suddenly and for no apparent reason turns on. Would you assign supernatural or mundane causes to this event?
Now this isn’t absolute proof that the switch wasn’t turned on by something otherworldly; perhaps it responds to both mundane and supernatural causes. But, well, if I may be blunt, Occam’s Razor. If your best explanations are “the Hand of Zeus” and “Mittens, my cat,” then …
I assume much the same things about this as any other sense: it’s there to give information about the world, but trickable. I mean, how tired you feel is a good measure of how long it’s been since you’ve slept, but you can drink coffee and end up feeling more energetic than is merited. So if I want to be able to tell how much sleep I really need, I should avoid caffeine. That doesn’t mean the existence of caffeine makes your subjective feelings of your own energy level arbitrary or worthless.
I assume much the same things about this as any other sense: it’s there to give information about the world, but trickable.
Interestingly, this sounds like the way that I used to view my own spiritual experiences. While I can’t claim to have ever had a full-blown vision, I have had powerful, spontaneous feelings associated with prayer and other internal and external religious stimuli. I assumed that God was trying to tell me something. Later, I started to wonder why I was also having these same powerful feelings at odd times clearly not associated with religious experiences, and in situations where there was no message for me as far as I could tell.
On introspection, I realized that I associated this with God because I’d been taught by people at church to identify this “frisson” with spirituality. At the time, it was the most accessible explanation. But there was no other reason for me to believe that explanation over a natural one. That I was getting data that seemed to contradict the “God’s spirit” hypothesis eventually led to an update.
Unfortunately, the example you’re drawing the analogy to is just as unclear to me as the original example I’d requested an explanation of.
I mean, I agree that seeing an image of my dead grandfather isn’t particularly strong evidence that he’s alive. Indeed, I see images of dead relatives on a fairly regular basis, and I continue to believe that they’re dead. But I think that’s equally true whether I deliberately invoked such an image, or didn’t.
I get that you think it is evidence that he’s alive when the image isn’t deliberately invoked, and I can understand how the reason for that would be the same as the reason for thinking that a mystical experience “counts” when it isn’t deliberately invoked, but I am just as unclear about what that reason is as I was to start with.
If I suddenly saw my dead grandpa standing in front of me, that would be surprising enough that I’d want an explanation. It wouldn’t be strong enough to make me believe by itself, but I’d say hello and see whether he answered and sounded like my grandpa, then tell him he looks like someone I know and watch his reaction; if he reacted like Grandpa, I’d touch him to ascertain that he’s corporeal, then invite him to come chat with me until I wake up. Assuming everything else seemed non-dream-like (I’ll eventually have to read something, which provides an opportunity to test whether or not I’m dreaming, plus I can try comparing physics to how it should be, perhaps by trying to fly), I’d tell my mom he’s here.
Whereas if I had such a button, I’d ignore the image, because it wouldn’t be surprising. I suppose looking at photographs is kind of like the button.
Well, wait up. Now you’re comparing two conditions with two variables, rather than one.
That is, not only is grandpa spontaneous in case A and button-initiated in case B, but also grandpa is a convincing corporeal facsimile of your grandpa in case A and not any of those things in case B. I totally get how a convincing facsimile of grandpa would “count” where an unconvincing image wouldn’t (and, by analogy, how a convincing mystical experience would count where an unconvincing one wouldn’t) but that wasn’t the claim you started out making.
Suppose you discovered a button that, when pressed, created something standing in front of you that looked like your dead grandpa, sounded and reacted like your grandpa, chatted with you like you believe your grandpa would, etc. Would you ignore that?
It seems like you’re claiming that you would, because it wouldn’t be surprising… from which I infer that mystical experiences have to be surprising to count (which had been my original question, after all). But I’m not sure I properly understood you.
For my own part, if I’m willing to believe that my dead grandpa can come back to life at all, I can’t see why the existence of a button that does this routinely should make me less willing to believe it.
The issue is that there is not a reliable “see-an-image-of-Grandpa button” in existence for mystical experiences. In other words, I’m unaware of any techniques that reliably induce mystical experiences. Since there are no techniques for reliably inducing mystical experiences, there is no basis for rejecting some examples of mystical experience as “unnatural/artificial mystical experiences.”
As an aside, if you are still interested in evaluating readings, I would be interested in your take on this one
The issue is that there is not a reliable “see-an-image-of-Grandpa button” in existence for mystical experiences. In other words, I’m unaware of any techniques that reliably induce mystical experiences.
You’ve stated that you judge morality on a consequentialist basis. Now you state that going to hell is somehow not equivalent to god torturing you for eternity. What gives?
Also: You believe in god because your belief in god implies that you really ought to believe in god? What? Is that circular or recursively justified? If the latter, please explain.
It’s not exactly rigorous, but you could try leaving bagels at Christian and Wiccan gatherings of approximately the same size and see how many dollars you get back.
That’s an idea, but you’d need to know how they started out. If generally nice people joined one religion and stayed the same, and generally horrible people joined the other and became better people, they might look the same on the bagel test.
True. You could control for that by seeing if established communities are more or less prone to stealing bagels than younger ones, but that would take a lot more data points.
Indeed. Or you could test the people themselves individually. What if you got a bunch of very new converts to various religions, possibly more than just Christianity and Wicca, and tested them on the bagels and gave them a questionnaire containing some questions about morals and some about their conversion and some decoys to throw them off, then called them back again every year for the same tests, repeating for several years?
I don’t really trust self-evaluation for questions like this, unfortunately—it’s too likely to be confounded by people’s moral self-image, which is exactly the sort of thing I’d expect to be affected by a religious conversion. Bagels would still work, though.
Actually, if I was designing a study like this I think I’d sign a bunch of people up ostensibly for longitudinal evaluation on a completely different topic—and leave a basket of bagels in the waiting room.
What about a study ostensibly of the health of people who convert to new religions? Bagels in the waiting room, new converts, random not-too-unpleasant medical tests for no real reason? Repeat yearly?
The moral questionnaire would be interesting because people’s own conscious ethics might reflect something cool and if you’re gonna test it anyway… but on the other hand, yeah. I don’t trust them to evaluate how moral they are, either. But if people signal what they believe is right, then that means you do know what they think is good. You could use that to see a shift from no morals at all to believing morals are right and good to have. And just out of curiosity, I’d like to see if they shifted from deontologist to consequentialist ethics, or vice versa.
People don’t necessarily signal what they think is right; sometimes they signal attitudes they think other people want them to possess. Admittedly, in a homogenous environment that can cause people to eventually endorse what they’ve been signaling.
Yes, definitely. Or in a waiting room. “Oops, sorry, we’re running a little late. Wait here in this deserted waiting room till five minutes from now, bye. :)” Otherwise, they might not see them.
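(For what it’s worth, the analysis at each yearly visit wouldn’t need to be fancy. Below is a minimal sketch, assuming you simply tally how many converts in each group left money for their bagel; the group labels and the 40-of-50 versus 33-of-50 counts are invented for illustration, and a real longitudinal design would also have to account for the repeated yearly measurements per person, which this ignores.)

```python
import math

def two_proportion_z(paid_a: int, n_a: int, paid_b: int, n_b: int) -> float:
    """z statistic comparing the bagel-payment rate of group A vs group B,
    using the usual pooled-proportion standard error."""
    p_a, p_b = paid_a / n_a, paid_b / n_b
    pooled = (paid_a + paid_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical year-one tallies: 40/50 converts in group A paid, 33/50 in group B.
z = two_proportion_z(40, 50, 33, 50)
print(f"z = {z:.2f}")  # |z| > 1.96 would be nominally significant at the 5% level
```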
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte. Also, if Astarte knows this, but pretends otherwise, then Astarte’s a liar.
Or perhaps neither Jehovah nor Astarte knows now who will dominate in the end, and any promises either makes to any followers are, ahem, over-confident? :-) There was a line I read somewhere about how all generals tell their troops that their side will be victorious...
So you’re assuming both sides are in a duel, and that the winner will send xyr worshipers to heaven and the loser’s worshipers to hell? Because I was not.
Only Jehovah. He says that he’s going to send his worshipers to heaven and Astarte’s to hell. Astarte says neither Jehovah nor she will send anyone anywhere. Either one could be a liar, or they could be in a duel and each describing what happens if xe wins.
Only as a hypothetical possibility. (From such evidence as I’ve seen I don’t think either really exists. And I have seen a fair number of Wiccan ceremonies—which seem like reasonably decent theater, but that’s all.) One could construe some biblical passages as predicting some sort of duel—and if one believed those passages, and that interpretation, then the question of whether one side was overstating its chances would be relevant.
I know how non-crazy I am. I know exactly the extent to which I’ve considered illness affecting my thoughts as a possible explanation.
Maybe I’m lacking context, but I’m not sure why you bring this up. Has anyone here described religious beliefs as being characteristically caused by mental illness? I’d be concerned if they had, since such a statement would be (a) incorrect and (b) stigmatizing.
Has anyone here described religious beliefs as being characteristically caused by mental illness? I’d be concerned if they had, since such a statement would be (a) incorrect and (b) stigmatizing.
In this post, Eliezer characterized John C. Wright’s conversion to Catholicism as the result of a temporal lobe epileptic fit and said that at least some (not sure if he meant all) religious experiences were “brain malfunctions.”
The relevant category is probably not explanations for religious beliefs, but rather explanations of experiences such as AK has reported of what, for lack of a better term, I will call extrasensory perception. Most of the people I know who have religious beliefs don’t report extrasensory perception, and most of the people I know who report extrasensory perception don’t have religious beliefs. (Though of the people I know who do both, a reasonable number ascribe a causal relationship between them. The direction varies.)
But, mental illness is not required to experience strong, odd feelings or even to “hear voices”. Fully-functional human brains can easily generate such things.
Religious experience isn’t usually pathologized in the mainstream (academically or by laypeople) unless it makes up part of a larger pattern of experience that’s disruptive to normal life, but that doesn’t say much one way or another about LW’s attitude toward it.
My experience with LW’s attitude has been similar, though owing to a different reason. Religion generally seems to be treated here as the result of cognitive bias, same as any number of other poorly set-up beliefs.
Though LW does tend to use the word “insane” in a way that includes any kind of irrational cognition, I so far have interpreted that to mostly be slang, not meant to literally imply that all irrational cognition is mental illness (although the symptoms of many mental illnesses can be seen as a subset of irrational cognition).
Though LW does tend to use the word “insane” in a way that includes any kind of irrational cognition, I so far have interpreted that to mostly be slang, not meant to literally imply mental illness (although the symptoms of many mental illnesses can be seen as a subset of irrational cognition).
Not having certain irrational biases can be said to be a subset of mental illness.
How so? I can only think of Straw Vulcan examples.
A subset of those diagnosed or diagnosable with high functioning autism and a subset of the features that constitute that label fit this category. Being rational is not normal.
(Or, by “can be said”, do you mean to imply that you disagree with the statement?)
I don’t affiliate myself with the DSM, nor is it always representative of an optimal way of carving reality. In this case I didn’t want to specify one way or the other.
tl;dr for the last two comments (Just to help me understand this; if I misrepresent anyone, please call me out on it.)
Yvain: So you believe in multiple factions of supernatural beings, why do you think Jehovah is the benevolent side? Other gods have done awesomecool stuff too, and Jehovah’s known to do downright evil stuff.
AK: Not multiple factions, just two. As to why I think Jehovah’s the good guy.....
And knowing how my life has gone, I know how I’ve changed as a person since accepting Jesus, or Jehovah if that’s the word you prefer. They don’t mention drastic changes to their whole personalities to the point of near-unrecognizability even to themselves.
Don’t you think that’s an unjustified nitpick? Absolutely awful people are rare, people who have revelations are rarer, so obviously absolutely awful people who had revelations have to be extremely difficult to find. So it’s not really surprising that two links someone gave you don’t mention a story like that.
But I think you’re assuming that the hallmark of a true religion is that it drastically increases the morality of its adherents. And that’s an assumption you have no grounds for—all that happened in your case was that the needle of your moral compass swerved from ‘absolute scumbag’ to ‘reasonably nice person’. There’s no reason to generalise that and believe that the moral compass of a reasonably nice person would swerve further to ‘absolute saint’.
Anyhow, your testable prediction is ‘converts to false religions won’t show moral improvement’. I doubt there’s any data on stuff like that right now (if there is, my apologies), so we have to rely on anecdotal evidence. The problem with that, of course, is that it’s notoriously unreliable… If it doesn’t show what you want it to show, you can just dismiss it all as lies or outliers or whatever. Doesn’t really answer any questions.
And if you’re willing to consider that kind of anecdotal evidence, why not other kinds of anecdotal evidence that sound just as convincing?
I discount all miracle stories from people I don’t know, including Christian and Jewish miracle stories, which could at least plausibly be true. I discount them ALL when I don’t know the person.
And yet… Back to your premise. Even if your personality changed for the better… How does this show in any way that Jehovah’s a good guy? Surely even an evil daemon has no use for social outcasts with a propensity for random acts of violence; a normal person would probably serve them better. And how do you answer Yvain’s point about all the evil Jehovah has done? How do you know he’s the good guy?
....
Everyone else: Why are we playing the “let’s assume everything you say is true” game anyway? Surely it’d be more honest to try and establish that his mystical experiences were all hallucinations?
Well, now that you mention it… I infer that if you read someone’s user page and got sensation A or B off of it, you would consider that evidence about the user’s morality. Yes? No?
Yes. But it would be more credible to other people, and make for a publishable study, if we used some other measure. It’d also be more certain that we’d actually get information.
Obviously I can’t speak for AK, but maybe she believes that she has been epistemically lucky. Compare the religious case:
“I had this experience which gave me evidence for divinity X, so I am going to believe in X. Others have had analogous experiences for divinities Y and Z, but according to the X religion I adopted those are demonic, so Y and Z believers are wrong. I was lucky though, since if I had had a Y experience I would have become a Y believer”.
with philosophical cases like the ones Alicorn discusses there:
“I accept philosophical position X because of compelling arguments I have been exposed to. Others have been exposed to seemingly compelling arguments for positions Y and Z, but according to X these arguments are flawed, so Y and Z believers are wrong. I was lucky though, since if I had gone to a university with Y teachers I would have become a Y believer”.
It may be that the philosopher is also being irrational here and that she could strive more to transcend her education and assess X vs Y impartially, but in the end it is impossible to escape this kind of irrationality at all levels at once and assess beliefs from a perfect vacuum. We all find some things compelling and not others because of the kind of people we are and the kind of lives we have lived, and the best we can get is reflective equilibrium. Recursive justification hitting bottom and all that.
The question is whether AK is already in reflective equilibrium or if she can still profit from some meta-examination and reassess this part of her belief system. (I believe that some religious believers have reflected enough about their beliefs and the counterarguments to them that they are in this kind of equilibrium and there is no further argument from an atheist that can rationally move them—though these are a minority and not representative of typical religious folks.)
See my response here—if Alicorn is saying she knows the other side has arguments exactly as convincing as those which led her to her side, but that she is still justified in continuing to believe her side more likely than the other, I disagree with her.
What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away.
You’re doing it wrong. The power of the Litany comes from evidence. Every time you apply the Litany of Gendlin to an unsubstantiated assertion, a fairie drops dead.
“Ish,” yes. I have to admit I’ve had a hard time navigating this enormous thread, and haven’t read all of it, including the evidence of demonic influence you’re referring to. However, I predict in advance that 1) this evidence is based on words that a man wrote in an ancient book, and that 2) I will find this evidence dubious.
Two equally unlikely propositions should require equally strong evidence to be believed. Neither dragons nor demons exist, yet you assert that demons are real. Where, then, is the chain of entangled events leading from the state of the universe to the state of your mind? Honest truth-seeking is about dispassionately scrutinizing that chain, as an outsider would, and allowing others to scrutinize, evaluate, and verify it.
I was a Mormon missionary at 19. I used to give people copies of the Book of Mormon, testify of my conviction that it was true, and invite them to read it and pray about it. A few did (Most people in Iowa and Illinois aren’t particularly vulnerable to Mormonism). A few of those people eventually (usually after meeting with us several times) came to feel as I did, that the book was true. I told those people that the feeling they felt was the Holy Spirit, manifesting the truth to them. And if that book is true, I told them, then Joseph Smith must have been a true prophet. And as a true prophet, the church that he established must be the Only True Church, according to Joseph’s revelations and teachings. I would then invite them to be baptized (which was the most important metric in the mission), and to become a member of the LDS church. One of the church’s teachings is that a person can become as God after death (omniscience and omnipotence included). Did the chain of reasoning leading from “I have a feeling that this book is true” justify the belief that “I can become like God”?
You are intelligent and capable of making good rhetorical arguments (from what I have read of your posts in the last week or two). I see you wielding Gendlin, for example, in support of your views. At some level, you’re getting it. But the point of Gendlin is to encourage truth-seekers desiring to cast off comforting false beliefs. It works properly only if you are also willing to invoke Tarski:
Let me not become attached to beliefs I may not want.
Upvoted for being a completely reasonable comment given that you haven’t read through the entirety of a thread that’s gotten totally monstrous.
However, I predict in advance that 1) this evidence is based on words that a man wrote in an ancient book,
Only partly right.
2) I will find this evidence dubious.
Of course you will. If I told you that God himself appeared to me personally and told me everything in the Bible was true, you’d find that dubious, too. Perhaps even more dubious.
Where, then, is the chain of entangled events leading from the state of the universe to the state of your mind?
Already partly in other posts on this thread (actually largely in other posts on this thread), buried somewhere, among something. You’ll forgive me for not wanting to retype multiple pages, I hope.
If I told you that God himself appeared to me personally and told me everything in the Bible was true, you’d find that dubious, too.
Certainly. I’m now curious though: if I told you that God appeared to me personally and told me everything in the Bible was true (either for some specific meaning of “the Bible,” which is of course an ambiguous phrase, or leaving it not further specified), roughly how much confidence would you have that I was telling you the truth?
It would depend on how you said it—as a joke, or as an explanation for why you suddenly believed in God and had decided to convert to Christianity, or as a puzzling experience that you were trying to figure out, or something else—and whether it was April 1 or not, and what you meant by “the Bible” (whether you specified it or not), and how you described God and the vision and your plans for the future.
But I’d take it with a grain of salt. I’d probably investigate further and continue correspondence with you for some time, both to help you as well as I could and to ascertain with more certainty the source of your belief that God came to you (whether he really did or it was a drug-induced hallucination or something). It would not be something I’d bet on either way, at least not just from hearing it said.
That’s a bizarre thing to say. Is their offense evidence that I’m wrong?
No, but generally, applying a derogatory epithet to an entire group of people is seen as rude, unless you back it up with evidence, which in this case you did not do. You just stated it.
So does calling people Cthulhu-worshipers.
In his afterword, EY seems to be saying that the benign actions of his friends and family are inconsistent with the malicious actions of YHVH, as he is depicted in Exodus. This is different from flat-out stating, “all theists are evil” and leaving it at that. EY is offering evidence for his position, and he is also giving credit to theists for being good people despite their religion (as he sees it).
You guys sure seem quick to tell me that my beliefs are offensive, but if I said the same to you, you’d understand why that’s beside the point.
I can’t speak for “you guys”, only for myself; and I personally don’t think that your beliefs are particularly offensive, just the manner in which you’re stating them. It’s kind of like the difference between saying, “Christianity is wrong because Jesus is a fairytale and all Christians are idiots for believing it”, versus, “I believe that Christians are mistaken because of reasons X, Y and Z”.
If you want me to stop believing it, tell me why you think it’s wrong.
Well, personally, I believe it’s wrong because no gods or demons of any kind exist.
Wiccans, on the other hand, would probably tell you that you’re wrong because Wicca had made them better people, who are more loving, selfless, and considerate of others, which is inconsistent with the expected result of worshiping evil demons. I can’t speak for all Wiccans, obviously; this is just what I’d personally heard some Wiccans say.
I should probably point out at this point that Wiccans (well, at least those whom I’d met), consider this point of view utterly misguided and incredibly offensive.
I object to the use of social politics to overwhelm assertions of fact. Christians and Wiccans obviously find each other offensive rather frequently. Both groups (particularly the former) probably also find me offensive. In all cases I say that is their problem.
Now if the Christians were burning the witches I might consider it appropriate to intervene forcefully...
Incidentally I wouldn’t have objected if you responded to “They consort with demons” with “What a load of bullshit. Get a clue!”
I was really objecting to the unsupported assertion; I wouldn’t have minded if AK said, “they consort with demons, and here’s the evidence”.
Incidentally I wouldn’t have objected if you responded to “They consort with demons” with “What a load of bullshit. Get a clue!”
Well, I personally do fully endorse that statement, but the existence of gods and demons is a matter of faith, or of personal experience, and thus whatever evidence or reason I can bring to bear in support of my statement is bound to be unpersuasive.
Oh the innuendo. At this point in the thread, I guess the only way to make the depravity more exquisite would be if you said you enjoy being called a demon’s consort. 0_0
Well, if the entities Wiccans worship actually did exist, rather than existing only in a lame memetic or psychological-trick way… it is very unlikely they would be benign. The same could be said of many other religions.
Because the religion is evil rather than misguided. Whereas, say, Hinduism, for instance, is just really misguided. See other conversation. Also see Exodus 22:18 and Deuteronomy 18:10.
(I wish I had predicted that this would end this way before I answered that post… then I might not have done so.)
There is nothing that you can claim, nothing that you can demand, nothing that you can take. And as soon as you try to take something as if it were your own—you lose your [innocence]. The angel with the flaming sword stands armed against all selfhood that is small and particular, against the “I” that can say “I want...” “I need...” “I demand...” No individual enters Paradise, only the integrity of the Person.
Only the greatest humility can give us the instinctive delicacy and caution that will prevent us from reaching out for pleasures and satisfactions that we can understand and savor in this darkness. The moment we demand anything for ourselves or even trust in any action of our own to procure a deeper intensification of this pure and serene rest in [God], we defile and dissipate the perfect gift that [He] desires to communicate to us in the silence and repose of our own powers.
If there is one thing we must do it is this: we must realize to the very depths of our being that this is a pure gift of [God] which no desire, no effort and no heroism of ours can do anything to deserve or obtain. There is nothing we can do directly either to procure it or to preserve it or to increase it. Our own activity is for the most part an obstacle to the infusion of this peaceful and pacifying light, with the exception that [God] may demand certain acts and works of us by charity or obedience, and maintain us in deep experimental union with [Him] through them all, by [His] own good pleasure, not by any fidelity of ours.
At best we can dispose ourselves for the reception of this great gift by resting in the heart of our own poverty, keeping our soul as far as possible empty of desires for all the things that please and preoccupy our nature, no matter how pure or sublime they may be in themselves.
And when [God] reveals [Himself] to us in contemplation we must accept [Him] as [He] comes to us, in [His] own obscurity, in [His] own silence, not interrupting [Him] with arguments or words, conceptions or activities that belong to the level of our own tedious and labored existence.
We must respond to [God]’s gifts gladly and freely with thanksgiving, happiness and joy; but in contemplation we thank [Him] less by words than by the serene happiness of silent acceptance. … It is our emptiness in the presence of the abyss of [His] reality, our silence in the presence of [His] infinitely rich silence, our joy in the bosom of the serene darkness in which [His] light holds us absorbed, it is all this that praises [Him]. It is this that causes love of [God] and wonder and adoration to swim up into us like tidal waves out of the depths of that peace, and break upon the shores of our consciousness in a vast, hushed surf of inarticulate praise, praise and glory!
(I might fail to communicate clearly with this comment; if so, my apologies, it’s not purposeful. E.g. normally if I said “Thomistic metaphysical God” I would assume the reader either knew what I meant (or was willing to Google “Thomism”, say) or wasn’t worth talking to. I’ll try not to do that kind of thing in this comment as badly as I normally do. I’m also honestly somewhat confused about a lot of Catholic doctrine and so my comment will likely be confused as a result. To make things worse I only feel as if I’m thinking clearly if I can think about things in terms of theoretical computer science, particularly algorithmic probability theory; unfortunately not only is it difficult to translate ideas into those conceptual schemes, those conceptual schemes are themselves flawed (e.g. due to possibilities of hypercomputation and fundamental problems with probability that’ve been unearthed by decision theory). So again, my apologies if the following is unclear.)
I’m going to accept your interpretation at face value, i.e. accept that you’re blessed with a supernatural charisma or something like that. That said, I’m not yet sure I buy the idea that the Thomistic metaphysical God, the sole optimal decision theory, the Form of the Good, the Logos-y thing, has much to do with transhumanly intelligent angels and demons of roughly the sort that folk around here would call superintelligences. (I haven’t yet read the literature on that subject.) In my current state of knowledge if I was getting supernatural signals (which I do, but not as regularly as you do) then I would treat them the same way I’d treat a source of information that claimed to be Chaitin’s constant: skeptically.
In fact it might not be a surface-level analogy to say that God is Chaitin’s omega (and is thus a Turing oracle), for they would seem to share a surprising number of properties. Of course Chaitin’s constant isn’t computable, so there’s no algorithmic way to check if the signals you’re getting come from God or from a demon that wants you to think it’s God (at least for claimed bits of Chaitin’s omega that you don’t already know). I believe the Christians have various arguments about states of mind that protect you from demonic influences like that; I haven’t read this article on infallibility yet but I suspect it’s informative.
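(Background for anyone who hasn’t met Chaitin’s Ω: for a prefix-free universal machine U it is the halting probability

Ω_U = Σ_{p : U(p) halts} 2^(−|p|)

and knowing its first n bits is enough to decide the halting problem for every program of length at most n. That oracle-like power is exactly why claimed bits you don’t already possess can’t be verified mechanically: checking them would mean solving halting problems no algorithm can solve.)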
Because there doesn’t seem to be an algorithmic way of checking if God is really God rather than any other agent that has more bits of Chaitin’s constant than you do, you’re left in a situation where you have to have what is called faith, I think. (I do not understand Aquinas’s arguments about faith yet; I’m not entirely sure I know what it is. I find the ideas counter-intuitive.) I believe that Catholics and maybe other Christians say that conscience is something like a gift from God and that you can trust it, so if your conscience objects to the signals you’re getting then that’s at least a red flag that you might be being influenced by self-delusion or demons or what have you. But this “conscience” thing seems to be algorithmic in nature (though that’s admittedly quite a contentious point), so if it can check the truth value of the moral information you’re getting supernaturally then you already had those bits of Chaitin’s constant. If your conscience doesn’t say anything about it then it would seem you’re dealing with a situation where you’re supposed/have to have faith. That’s the only way you can do better than an algorithmic approach.
Note that part of the reason I think about these things is ’cuz I want my FAI to be able to use bits of Chaitin’s constant that it finds in its environment so as to do uncomputable things it otherwise couldn’t. It is an extension of this same personal problem of what to do with information whose origin you can’t algorithmically verify.
Anyway it’s a sort of awkward situation to be in. It seems natural to assume that this agent is God but I’m not sure if that is acceptable by the standard of (Kant’s weirdly naive version of) the categorical imperative. I notice that I am very confused about counterfactual states of knowledge and various other things that make thinking about this very difficult.
So um, how do you approach the problem? Er did I even describe the problem in such a way that it’s understandable?
I don’t think I’m smart enough to follow this comment. Edit: but I think you’re wrong about me having some sort of supernatural charisma… I’m pretty sure I haven’t said I’m special, because if I did, I’d be wrong.
Hm, so how would you describe the mechanism behind your sensations then? (Sorry, I’d been primed to interpret your description in light of similar things I’d seen before which I would describe as “supernatural” for lack of a better word.)
...I wasn’t going to come back to say anything, but fine. I’d say it’s God’s doing. Not my own specialness. And I’m not going to continue this conversation further.
Okay, thanks. I didn’t mean to imply ’twas your own “specialness” as such; apologies for being unclear. ETA: Also I’m sorry for anything else? I get the impression I did/said something wrong. So yeah, sorry.
Sensation A felt like there was something on my skin, like dirt or mud, and something squeezing my heart
The dirt just sits there? It doesn’t also squeeze your skin? Or instead throb as if it had been squeezed for a while, but uniformly, not with a tourniquet, and was just released?
Oh and also you should definitely look into using this to help build/invoke FAI/God. E.g. my prospective team has a slot open which you might be perfect for. I’m currently affiliated with Leverage Research who recently received a large donation from Jaan Tallinn, who also supports the Singularity Institute.
I’m not convinced that this is an accurate perception of AspiringKnitter’s comments here so far.
E.g., I don’t think she’s yet claimed both omnipotence and omnibenevolence as attributes of god, so you may be criticizing views she doesn’t hold. If there’s a comment I missed, then ignore me. :)
But at a minimum, I think you misunderstood what she was asking by, “Do you mean that I can’t consider his nonexistence as a counterfactual?” She was asking, by my reading, if you thought she had displayed an actual incapability of thinking that thought.
I don’t think my correct characterization of a fictional being has any bearing on whether or not it exists.
If you’re granted “fictional”, then no. But if you don’t believe in unicorns, you’d better mean “magical horse with a horn” and not “narwhal” or “rhinoceros”.
given that I’ve gotten several downvotes (over seventeen, I think) in the last couple of hours, that’s either the work of someone determined to downvote everything I say or evidence that multiple people think I’m being stupid.
For what it’s worth, the downvotes appear to be correlated with anyone discussing theology. Not directed at you in particular. At least, that’s my impression.
I do assign a really low prior probability to the existence of lucky socks anywhere
You do realize it might very well mean death to your Bayes score to say or think things like that around an omnipotent being who has a sense of humor, right? This is the sort of Dude Who wrestles with a mortal then names a nation to honor the match just to taunt future wannabe-Platonist Jews about how totally crazy their God is. He is perfectly capable of engineering some lucky socks just so He can make fun of you about it later. He’s that type of Guy. And you do realize that the generalization of Bayes score to decision theoretic contexts with objective morality is actually a direct measure of sinfulness? And that the only reason you’re getting off the hook is that Jesus allegedly managed to have a generalized Bayes score of zero despite being unable to tell a live fig tree from a dead one at a moderate distance and getting all pissed off about it for no immediately discernible reason? Just sayin’, count your blessings.
He is perfectly capable of engineering some lucky socks just so He can make fun of you about it later.
Yes, of course. Though why he’d do that, instead of all the other things he could be doing, like creating a lucky hat or sending a prophet to explain the difference between “please don’t be an idiot and quibble over whether it might hurt my feelings if you tell me the truth” and “please be as insulting as possible in your dealings with me”, is anyone’s guess.
And you do realize that the generalization of Bayes score to decision theoretic contexts with objective morality is actually a direct measure of sinfulness?
No, largely because I have no idea what that would even mean. However, if you mean that using good epistemic hygiene is a sin because there’s objective morality, or if you think the objective morality only applies in certain situations which require special epistemology to handle, you’re wrong.
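A note on the term, for readers who haven’t met it: the “Bayes score” referenced above is usually the logarithmic scoring rule, which rewards a forecaster by the log of the probability they assigned to what actually happened; the “generalization to decision theoretic contexts” is the commenter’s own speculative notion, not a standard result. A minimal statement of the ordinary version:

```latex
% Logarithmic ("Bayes") score of a forecaster over observed outcomes
% x_1, ..., x_n, given the probabilities p(x_i) assigned to them beforehand.
% The score is at most 0, with equality only if probability 1 was assigned
% to everything that actually happened; that is why a "Bayes score of zero"
% in the comment above amounts to perfect prediction.
\[
  \mathrm{score} \;=\; \sum_{i=1}^{n} \log p(x_i) \;\le\; 0
\]
```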
It’s just that now “lucky socks” is the local Schelling point. It’s possible I don’t understand God very well, but I personally am modally afraid of jinxing stuff or setting myself up for dramatic irony. It has to do with how my personal history’s played out. I was mostly just using the socks thing as an example of this larger problem of how epistemology gets harder when there’s a very powerful entity around. I know I have a really hard time predicting the future because I’m used to… “miracles” occurring and helping me out, but I don’t want to take them for granted, but I want to make accurate predictions… And so on. Maybe I’m over-complicating things.
Okay, I can understand that. It can be annoying. However, the standard framework does still apply; you can still use Bayes. It’s like anything else confusing you.
I see what you’re saying and it’s a sensible approximation but I’m not actually sure you can use Bayes in situations with “mutual simulation” like that. Are you familiar with updateless/ambient decision theory perchance?
This post combined with all the comments is perhaps the best place to start, or this post might be an easier introduction to the sorts of problems that Bayes has trouble with. This is the LW wiki hub for decision theory. That said, it would take me a while to explain why I think it’d particularly interest you and how it’s related to things like lucky socks, especially as a lot of the most interesting ideas are still highly speculative. I’d like to write such an explanation at some point but can’t at the moment.
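For reference, the “standard framework” and “Bayes” in the exchange above is just ordinary Bayesian conditioning; the linked decision-theory material concerns cases (such as mutual simulation) where naively applying it is contested. The plain update rule, for concreteness:

```latex
% Posterior belief in hypothesis H after observing evidence E, computed
% from the prior P(H) and the likelihood P(E | H). The H_i are mutually
% exclusive, exhaustive hypotheses used to expand the normalizer P(E).
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
  \qquad
  P(E) \;=\; \sum_{i} P(E \mid H_i)\,P(H_i)
\]
```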
I think this is missing the point: they believe that, but they’re wrong.
...and they can say exactly the same thing about you. It’s exactly that symmetry that defines No True Scotsman. You think you are reading and applying the text correctly, they think they are. It doesn’t help to insist that you’re really right and they’re really wrong, because they can do the same thing.
...and they can say exactly the same thing about you. It’s exactly that symmetry that defines No True Scotsman.
No, No True Scotsman is characterized by moveable goalposts. If you actually do have a definition of True Scotsman that you can point to and won’t change, then you’re not going to fall under this fallacy.
Okay, I’m confused here. Do you believe there are potentially correct and incorrect answers to the question “what does the Bible say that Jesus taught while alive?”
IMO, most Christians unconsciously concentrate on the passages that match their preconceptions, and ignore or explain away the rest. This behavior is ridiculously easy to notice in others, and equally difficult to notice in oneself.
For example, I expect you to ignore or explain away Matthew 10:34: “Do not think that I have come to bring peace to the earth. I have not come to bring peace, but a sword.”
I expect you find Mark 11:12-14 rather bewildering: “On the following day, when they came from Bethany, he was hungry. And seeing in the distance a fig tree in leaf, he went to see if he could find anything on it. When he came to it, he found nothing but leaves, for it was not the season for figs. And he said to it, “May no one ever eat fruit from you again.””
I still think Luke 14:26 has a moderately good explanation behind it, but there’s also a good chance that this is a verse I’m still explaining away, even though I’m not a Christian any more and don’t need to: “If anyone comes to me and does not hate his own father and mother and wife and children and brothers and sisters, yes, and even his own life, he cannot be my disciple.”
The bible was authored by different individuals over the course of time. That’s pretty well established. Those individuals had different motives and goals. IMO, this causes there to actually be competing strains of thought in the bible. People pick out the strains of thought that speak to their preconceived notions. For one last example, I expect you’ll explain James in light of Ephesians, arguing that grace is the main theme. But I think it’s equally valid for someone to explain Ephesians in light of James, arguing that changed behavior is the main theme. These are both valid approaches, in my mind, because contrary to the expectations of Christians (who believe that deep down, James and Ephesians must be saying the same thing), James and Ephesians are actually opposing view points.
Finally, I’ll answer your question: probably not. Not every collection of words has an objective meaning. Restricting yourself to the gospels helps a lot, but I still think they are ambiguous enough to support multiple interpretations.
I suspect that nearly all Christians will agree with your definition (excepting Mormons and JW’s, but I assume you added “divinity” in there to intentionally exclude them)
That isn’t a tacked on addition. It’s the core principle of the entire faith!
The way I see it, there appear to be enough contradictions and ambiguities in the Bible and associated fan work that it’s possible to use it to justify almost anything. (Including slavery.) So it’s hard to tell a priori what’s un-Christian and what isn’t.
Against a Biblical literalist, this would probably be a pretty good attack—if you think a plausible implication of a single verse in the Bible, taken out of context, is an absolute moral justification for a proposed action, then, yes, you can justify pretty much any behavior.
However, this does not seem to be the thrust of AspiringKnitter’s point, nor, even if it were, should we be content to argue against such a rhetorically weak position.
Rather, I think AspiringKnitter is arguing that certain emotions, attitudes, dispositions, etc. are repeated often enough and forcefully enough in the Bible so as to carve out an identifiable cluster in thing-space. A kind, gentle, equalitarian pacifist is (among other things) acting more consistently with the teachings of the literary character of Jesus than a judgmental, aggressive, elitist warrior. Assessing whether someone is acting consistently with the literary character of Jesus’s teachings is an inherently subjective enterprise, but that doesn’t mean that all opinions on the subject are equally valid—there is some content there.
You have a good point there.
Then again, there are plenty of times that Jesus says things to the effect of “Repent sinners, because the end is coming, and God and I are gonna kick your ass if you don’t!”
That is Jesus in half his moods speaking that way. But there’s another Jesus in there. There’s a Jesus who’s just paradoxical and difficult to interpret, a Jesus who tells people to hate their parents. And then there is the Jesus who, however little he fits how we want to think about Jesus, is nonetheless there in scripture, coming back amid a host of angels, destined to deal out justice to the sinners of the world. That is the Jesus that fully half of the American electorate is most enamored of at this moment.
The way I see it, there appear to be enough contradictions and ambiguities in the Bible and associated fan work that it’s possible to use it to justify almost anything.
Sacrifice other people’s wives to the devil. That’s almost certainly out.
(Including slavery.)
Yes, that’s a significant moral absurdity to us, but not a big deal to the cultures who created the religion or to the texts themselves. (Fairly ambivalent—mostly just supports following whatever the status quo is on the subject.)
So it’s hard to tell a priori what’s un-Christian and what isn’t.
No, it’s really not. There is plenty of grey but there are a whole lot of clear cut rules too. Murdering. Stealing. Grabbing guys by the testicles when they are fighting. All sorts of things.
Your comment seems to be about a general trend and doesn’t rest on slavery itself, correct?
Because if not, I just want to point out that the Bible never says “slavery is good”. It regulates it, ensuring minimal rights for slaves, and assumes it will happen, which is kind of like the rationale behind legalizing drugs. Slaves are commanded in the New Testament to obey their masters, which those telling them to do so explain as being so that the faith doesn’t get a bad reputation. The only time anyone’s told to practice slavery is as punishment for a crime, which is surely no worse than incarceration. At least you’re getting some extra work done.
I assume this doesn’t change your mind because you have other examples in mind?
One thing that struck me about the Bible when I first read it was that Jesus never flat-out said, “look guys, owning people is wrong, don’t do it”. Instead, he (as you pointed out) treats slavery as a basic fact of life, sort of like breathing or language or agriculture. There are a lot of parables in the New Testament which use slavery as a plot device, or as an analogy to illustrate a point, but none that imagine a world without it.
Contrast this to the modern world we live in. To most of us, slavery is almost unthinkable, and we condemn it whenever we see it. As imperfect as we are, we’ve come a long way in the past 2000 years—all of us, even Christians. That’s something to be proud of, IMO.
… I just want to point out that the Bible never says “slavery is good”. It regulates it, ensuring minimal rights for slaves, and assumes it will happen, which is kind of like the rationale behind legalizing drugs.
Hrm, I support legalizing-and-regulating (at least some) drugs and am not in favor of legalizing-and-regulating slavery. I just thought about it for 5 minutes and I still really don’t think they are analogous.
Deciding factor: sane, controlled drug use does not harm anyone (with the possible exception of the user, but they do so willingly). “sane, controlled” slavery would still harm someone against their will (with the exception of voluntary BDSM type relationships, but I’m pretty sure that’s not what we’re talking about).
Haha, I did think of that before making my last comment :)
Answer: in cases where said people are likely to harm others, yes. IMO, society gains more utilons from incarcerating them than the individuals lose from being incarcerated. Otherwise, I’d much rather see more constructive forms of punishment.
OK. So, consider a proposal to force prisoners to perform involuntary labor, in such a way that society gains more utilons from that labor than the individuals lose from being forced to perform it.
Would you support that proposal? Would you label that proposal “slavery”? If not (to either or both), why not?
It would probably depend on the specific proposal. I’d lean more towards “no” the more involuntary and demeaning the task. (I’m not certain my values are consistent here; I haven’t put huge amounts of thought into it.)
Would you label that proposal “slavery”?
Not in the sense I thought we were talking about, which (at least in my mind) included the concept of one individual “owning” another. In a more general sense, I guess yes.
Well, for my own part I would consider a system of involuntary forced labor as good an example of slavery as I can think of… to be told “yes, you have to work at what I tell you to work at, and you have no choice in the matter, but at least I don’t own you” would be bewildering.
That said, I don’t care about the semantics very much. But if the deciding factor in your opposition to legalizing and regulating slavery is that slavery harms someone against their will, then it seems strange to me that who owns whom is relevant here. Is ownership in and of itself a form of harm?
Tabooing “slavery”: “You committed crimes and society has deemed that you will perform task X for Y years as a repayment” seems significantly different (to me) from “You were kidnapped from country Z, sold to plantation owner W and must perform task X for the rest of your life”. I can see arguments for and against the former, but the latter is just plain evil.
This actually understates the degree of difference. Chattel slavery isn’t simply about involuntary labor. It also involves, for example, lacking the autonomy to marry without the consent of one’s master, the arbitrary separation of families and the selling of slaves’ children, etc.
Sure, I agree. But unless the latter is what’s being referred to Biblically, we do seem to have shifted the topic of conversation somewhere along the line.
It’s been a while since I read it last, but IIRC, the laws regarding slavery in the OT cover individuals captured in war as well as those sold into slavery to pay a debt.
The only time anyone’s told to practice slavery is as punishment for a crime, which is surely no worse than incarceration. At least you’re getting some extra work done.
In fact, often taking slaves is outright sinful. (Because you’re supposed to genocide them instead! :P)
Well, not the Pope, certainly. He’s a Catholic. But I thought a workable definition of “Christian” was “person who believes in the divinity of Jesus Christ and tries to follow his teachings”, in which case we have a pretty objective test.
Is this a “Catholics aren’t Christian” thing, or just drawing attention to the point that not all Christians are Catholic?
Alright. I’ve encountered some people of the former opinion, and while it really didn’t square with the impression you’ve given thus far, I would have been interested to see your reasoning if you’d in fact held that view.
Hmm, so apparently, looking up religious conversion testimonies on the intertubes is more difficult than I thought, because all the top search results lead to sites that basically say, “here’s why religion X is wrong and my own religion Y is the best thing since sliced bread”. That said, here’s a random compilation of Christianity-to-Islam conversion testimonials. You can also check out the daily “Why am I an Atheist” feature on Pharyngula, but be advised that this site is quite a bit more angry than Less Wrong, so the posts may not be representative.
BTW, I’m not endorsing any of these testimonials, I’m just pointing out that they do exist.
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence
Well, I brought that up because I know of at least one mental illness-related violent incident in my own extended family. That said, you are probably right in saying that schizophrenia and violence are not strongly correlated. However, note that violence against others was just one of the negative effects I’d brought up; existential risk to one’s self was another.
I think the key disagreement we’re having is along the following lines: is it better to believe in something that’s true, or in something that’s probably false, but has a positive effect on you as a person ? I believe that the second choice will actually result in a lower utility. Am I correct in thinking that you disagree ? If so, I can elaborate on my position.
Okay, so I mean, if you think you only want to fulfill your own selfish desires...
I don’t think there are many people (outside of upper management, maybe, heh), of any religious denomination or lack thereof, who wake up every morning and say to themselves, “man, I really want to fulfill some selfish desires today, and other people can go suck it”. Though, in a trivial sense, I suppose that one can interpret wanting to be nice to people as a selfish desire, as well...
Well, not the Pope, certainly. He’s a Catholic.
You keep asserting things like this, but to an atheist, or an adherent of any faith other than yours, these assertions are pretty close to null statements—unless you can back them up with some evidence that is independent of faith.
But I thought a workable definition of “Christian” was “person who believes in the divinity of Jesus Christ and tries to follow his teachings”
Every single person (plus or minus epsilon) who calls oneself “Christian” claims to “follow Jesus’s teachings”; but all Christians disagree on what “following Jesus’s teachings” actually means, so your test is not objective. All those Christians who want to persecute gay people, ban abortion, teach Creationism in schools, or even merely follow the Pope and venerate Mary—all of them believe that they are doing what Jesus would’ve wanted them to do, and they can quote Bible verses to prove it.
Compare it with a relevant quote from the Bible, which has been placed in different places in different versions...
Some Christians claim that this story is a later addition to the Bible and therefore non-authoritative. I should also mention that both YHVH and, to a lesser extent, Jesus, did some pretty intolerant things; such as committing wholesale genocide, whipping people, condemning people, authorizing slavery, etc. The Bible is quite a large book...
That said, here’s a random compilation of Christianity-to-Islam conversion testimonials. You can also check out the daily “Why am I an Atheist” feature on Pharyngula, but be advised that this site is quite a bit more angry than Less Wrong, so the posts may not be representative.
Thank you.
Well, I brought that up because I know of at least one mental illness-related violent incident in my own extended family.
I’m sorry.
I think the key disagreement we’re having is along the following lines: is it better to believe in something that’s true, or in something that’s probably false, but has a positive effect on you as a person ?
No, I don’t think that’s true, because it’s better to believe what’s true.
I believe that the second choice will actually result in a lower utility.
So do I, because of the utility I assign to being right.
Am I correct in thinking that you disagree ?
No.
Every single person (plus or minus epsilon) who calls oneself “Christian” claims to “follow Jesus’s teachings”; but all Christians disagree on what “following Jesus’s teachings” actually means, so your test is not objective. All those Christians who want to persecute gay people, ban abortion, teach Creationism in schools, or even merely follow the Pope and venerate Mary—all of them believe that they are doing what Jesus would’ve wanted them to do, and they can quote Bible verses to prove it.
Suppose, hypothetically, that current LessWrong trends of adding rituals and treating EY as to some extent above others continue. And then suppose that decades or centuries down the line, we haven’t got transhumanism, but we HAVE got LessWrongians who now argue about what EY really meant. And some of them disagree with each other, and others outside their community just raise their eyebrows and think man, LessWrongians are such a weird cult. Would it be correct, at least, to say that there’s a correct answer to the question “who is following Eliezer Yudkowsky’s teachings?” That there’s a yes or no answer to the question “did EY advocate prisons just because he failed to speak out against them?” Or to the question “would he have disapproved of people being irrational?” If not, I’ll admit you’re being self-consistent, at least.
Some Christians claim that this story is a later addition to the Bible and therefore non-authoritative.
And that claim should be settled by studying the relevant history.
EDIT: oh, and I forgot to mention that one doesn’t have to actually think “I want to go around fulfilling my selfish desires” so much as just have a utility function that values only one’s own comfort and not other people’s.
No, I don’t think that’s true, because it’s better to believe what’s true.
This statement appears to contradict your earlier statements that a). It’s better to live with the perception-altering symptoms of schizophrenia, than to replace those symptoms with depression and other side-effects, and b). You determine the nature of every “gut feeling” (i.e., whether it is divine or internal) by using multiple criteria, one of which is, “would I be better off as a person if this feeling was, in fact, divine”.
Suppose, hypothetically, that current LessWrong trends of adding rituals and treating EY as to some extent above others continue.
I hope not, I think people are engaging in more than enough EY-worship as it is, but that’s beside the point...
And then suppose that decades or centuries down the line, we haven’t got transhumanism, but we HAVE got LessWrongians who now argue about what EY really meant… Would it be correct, at least, to say that there’s a correct answer to the question “who is following Eliezer Yudkowsky’s teachings?”
Since we know today that EY actually existed, and what he talked about, then yes. However, this won’t be terribly relevant in the distant future, for several reasons:
Even though everyone would have an answer to this question, it is far from guaranteed that more than zero answers would be correct, because it’s entirely possible that no Yudkowskian sect would have the right answer.
Our descendants likely won’t have access to EY’s original texts, but to Swahili translations from garbled Chinese transcriptions, or something; it’s possible that the translations would reflect the translators’ preferences more than EY’s original intent. In this case, EY’s original teachings would be rendered effectively inaccessible, and thus the question would become unanswerable.
Unlike us here in the past, our future descendants won’t have any direct evidence of EY’s existence. They may have so little evidence, in fact, that they may be entirely justified in concluding that EY was a fictional character, like James Bond or Harry Potter. I’m not sure if fictional characters can have “teachings” or not.
That there’s a yes or no answer to the question “did EY advocate prisons just because he failed to speak out against them?”
This question is not analogous, because, unlike the characters on the OT and NT, EY does not make a habit of frequently using prisons as the basis for his parables, nor does EY claim to be any kind of a moral authority. That said, if EY did say these things, and if prisons were found to be extremely immoral in the future—then our descendants would be entirely justified in saying that EY’s morality was far inferior to their own.
And that claim should be settled by studying the relevant history.
I doubt whether there exist any reasonably fresh first-hand accounts of Jesus’s daily life (assuming, of course, that Jesus existed at all). If such accounts did exist, they did not survive the millennia that passed since then. Thus, it would be very difficult to determine what Jesus did and did not do—especially given the fact that we don’t have enough secular evidence to even conclude that he existed with any kind of certainty.
This statement appears to contradict your earlier statements that
a). It’s better to live with the perception-altering symptoms of schizophrenia, than to replace those symptoms with depression and other side-effects,
I want to say I don’t know why you think I made that statement, but I do know, and it’s because you don’t understand what I said. I said that given that those drugs fix the psychosis less than half the time, that almost ten percent of cases spontaneously recover anyway, that the entire rest of the utility function might take overwhelming amounts of disutility from side-effects including permanent disfiguring tics, a type of unfixable restlessness that isn’t helped by fidgeting and usually causes great suffering, greater risk of diseases, lack of caring about anything, mental fog (which will definitely impair your ability to find the truth), and psychosis (not even kidding, that’s one of the side-effects of antipsychotics), and given that it can lead to a curtailing of one’s civil liberties to be diagnosed, it might not be worth it. Look, there’s this moral theory called utilitarianism where you can have one bad thing happen and still think it’s worth it because the alternative is worse, and it doesn’t just have to work for morals. It works for anything; you can’t just say “X is bad, fix X at all cost”. You have to be sure it’s not actually the best state of affairs first. Something can be both appalling and the best possible choice, and my utility function isn’t as simple as you seem to think it is. I think there are things of value besides just having perfectly clear perception.
Our descendants likely won’t have access to EY’s original texts, but to Swahili translations from garbled Chinese transcriptions, or something;
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could. /nitpick
b). You determine the nature of every “gut feeling” (i.e., whether it is divine or internal) by using multiple criteria, one of which is, “would I be better off as a person if this feeling was, in fact, divine”.
I really want to throw up my hands here and say “but I’ve explained this MULTIPLE TIMES, you are BEING AN IDIOT” but I remember the illusion of transparency. And that you haven’t understood. And that you didn’t make a deliberate decision to annoy me. But I’m still annoyed. I STILL want to call you an idiot, even though I know I haven’t phrased something correctly and I should explain again. That doesn’t even sound like what I believe or what I (thought I) said. (Maybe that’s how it came out. Ugh.)
Why is communication so difficult? Why doesn’t knowing that someone’s not doing it on purpose matter? It’s the sort of thing that you’d think would actually affect my feelings.
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could. /nitpick
You would be surprised… If it weren’t for the internet archive much information would have already been lost. Some modern websites are starting to use web design techniques (ajax-loaded content) that break such archive services.
I really want to throw up my hands here and say “but I’ve explained this MULTIPLE TIMES, you are BEING AN IDIOT” but I remember the illusion of transparency.
One option would be to reply with a pointer to your previous comment.
I see you’ve used the link syntax within a comment—this web site supports permalinks to comments as well.
At least you wouldn’t be forced to repeat yourself.
But since I obviously explained it wrong, what good does it do to remind him of where I explained it? I’ve used the wrong words, I need to find new ones. Ugh.
Best wishes. Was your previous explanation earlier in your interchange with Bugmaster? If so, I agree that Bugmaster would have read your explanation, and that pointing to it wouldn’t help (I sympathize). If, however, your previous explanation was in response to another lesswrongian, it is possible that Bugmaster missed it, in which case a pointer might help. I’ve been following your comments, but I’m sure I’ve missed some of them.
(I just came back from vacation, sorry for the late reply, and happy New Year ! Also, Merry Christmas if you are so inclined :-) )
Firstly, I operate by Crocker’s Rules, so you can call me anything you want and I won’t mind.
It works for anything; you can’t just say “X is bad, fix X at all cost”. You have to be sure it’s not actually the best state of affairs first.
I agree with you completely regarding utilitarianism (although in this case we’re not talking about the moral theory, just the approach in general). All I was saying is that, IMO, the utility one places on believing things that are likely to be actually true should be extremely high—possibly higher than the utility you assign to this feature. But “extremely high” does not mean “infinite”, of course, and it’s entirely possible that, in some cases, the disutility from all the side-effects will not be worth the utility gain—especially if the side-effects are preventing you from believing true things anyway (f.ex. “mental fog”, psychosis, depression, etc.).
That said, if I personally was seeing visions or hearing voices, I would be willing (assuming I remained reasonably rational, of course) to risk a very large disutility even for a less than 50% chance of fixing the problem. If I can’t trust my senses (or, indeed, my thoughts), then my ability to correctly evaluate my utility is greatly diminished. I could be thinking that everything is just great, while in reality I was hurting myself or others, and I’d be none the wiser. Of course, I could also be just great in reality, as well; but given the way this universe works, this is unlikely.
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could.
Data on the Internet is less permanent than many people think, IMO, but this is probably beside the point; I was making an analogy to the Bible, which was written in the days before the Internet, but (sadly) after the days of giant stone steles. Besides, the way things are going, it’s not out of the question that future versions of the Internet would all be written in Chinese...
Why is communication so difficult? Why doesn’t knowing that someone’s not doing it on purpose matter?
I think this is because you possess religious faith, which I have never experienced, and thus I am unable to evaluate what you say in the same frame of reference. Or it could be because I’m just obtuse. Or a bit of both.
Besides, the way things are going, it’s not out of the question that future versions of the Internet would all be written in Chinese...
I don’t think so. The popularity of the English language has gained momentum such that even if its original causes (the economic status of the US) ceased, it would go on for quite a while. Chinese hasn’t. See http://www.andaman.org/BOOK/reprints/weber/rep-weber.htm (It was written a decade and a half ago, but I don’t think the situation is significantly qualitatively different for English and Chinese in ways which couldn’t have been predicted back then.) I think English is going to remain the main international language for at least 30 more years, unless some major catastrophe occurs (where by major I mean ‘killing at least 5% of the world human population’).
You keep asserting things like this, but to an atheist, or an adherent of any faith other than yours, these assertions are pretty close to null statements—unless you can back them up with some evidence that is independent of faith.
There is a bit of ambiguity here, but I asked after it and apparently the more strident interpretation was not intended. The position that the Pope doesn’t determine who is Christian because the Pope is Catholic and therefore doesn’t speak with authority regarding those Christians who are not Catholic seems uncontroversial, internally consistent, and not privileging any particular view.
“Better person” here means “person who maximizes average utility better”.
Understood, though I was confused for a moment there. When other people say “better person”, they usually mean something like “a person who is more helpful and kinder to others”, not merely “a happier person”, though obviously those categories do overlap.
I think that by “maximizes average utility” AspiringKnitter meant utility averaged over every human being—so helpfulness and kindness to others is by necessity included.
Since a utility function is only defined up to affine transformations with positive scale factor, what does it mean to sum several utility functions together? (Sure someone has already thought about that, but I can’t think of anything sensible.)
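To make the difficulty concrete: a von Neumann–Morgenstern utility function represents the same preferences under any positive affine rescaling, so a sum or average over people depends on the arbitrary scale chosen for each person. A sketch of the problem (the normalization mentioned at the end is one possible convention, not a settled answer):

```latex
% Each person's utility u_i is only defined up to u_i -> a_i u_i + b_i
% with a_i > 0: the rescaled function ranks all lotteries identically.
% But the aggregate changes:
\[
  \sum_i \bigl(a_i\,u_i(x) + b_i\bigr)
  \;=\; \sum_i a_i\,u_i(x) \;+\; \text{const},
\]
% which can have a different maximizer than \sum_i u_i(x) unless all the
% a_i are equal. Any interpersonal sum therefore needs an extra convention
% for fixing the scales, e.g. normalizing each u_i to the range [0, 1]
% over the outcomes under consideration; that choice is itself contested.
```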
I haven’t studied schizophrenia in any detail, but wouldn’t a person suffering from it also have a skewed subjective perception of what “being miserable” is ?
Misery is a subjective experience. The schizophrenic patients I work with describe feeling a lot of distress because of their symptoms, and their voices usually tell them frightening things. So I would expect a person hearing voices due to psychosis to be more distressed than someone hearing God.
That said, I was less happy when I believed in God because I felt constantly that I had unmet obligations to him.
If the goal is to arrive at the truth no matter one’s background or extenuating circumstances, I don’t think this list quite does the trick. You want a list of steps such that, if a Muslim generated a list using the same cognitive algorithm, it would lead them to the same conclusion your list will lead you to.
From this perspective, #2 is extremely problematic; it assumes the thing you’re trying to establish from the spiritual experience (the veracity of Christianity). If a muslim wrote this step, it’d look totally different, as it would for any religion. (You do hint at this, props for that.) This step will only get you to the truth if you start out already having the truth.
#7 is problematic from a different perspective; well-being and truth-knowledge are not connected on a fundamental level, most noticeably when people around you don’t know the same things you know. For reference, see Galileo.
Also, my own thought: if we both agree that your brain can generate surprisingly coherent stuff while dreaming, then it seems reasonable to suppose the brain has machinery capable of the process. So my own null hypothesis is that that machinery can get triggered in ways which produce the content of spiritual experiences.
God has been known to speak to people through dreams, visions and gut feelings.
In addition to your discussion with APMason:
When you have a gut feeling, how do you know whether this is (most likely) a regular gut feeling, or whether this is (most likely) God speaking to you ? Gut feelings are different from visions (and possibly dreams), since even perfectly sane and healthy people have them all the time.
*There’s a joke I can’t find about some Talmudic scholars who are arguing. They ask God, a voice booms out from the heavens declaring which one is right, and the others fail to update.
I can’t find the source right now, but AFAIK this isn’t merely a joke, but a parable from somewhere in the Talmud. One of the rabbis wants to build an oven in a way that’s proscribed by the Law (because it’d be more convenient for some engineering reason that I forget), and the other rabbis are citing the Law at him to explain why this is wrong. The point of the parable is that the Law is paramount; not even God has the power to break it (to say nothing of mere mortal rabbis). The theme of rules and laws being ironclad is a trope of Judaism that does not, AFAIK, exist in Christianity.
In the Talmudic story, the voice of God makes a claim about the proper interpretation of the Law, but it is dismissed because the interpretation of the Law lies in the domain of Men, where it is bound by certain peculiar hermeneutics. The point is that Halacha does not flow from a single divine authority, but is produced by a legal tradition.
And that’s not what I’m thinking of. It’s probably a joke about the parable, though. But I distinctly recall it NOT having a moral and being on the internet on a site of Jewish jokes.
Bugmaster: Well, go with your gut either way, since it’s probably right.
It could be something really surprising to you that you don’t think makes sense or is true, just as one example. Of course, if not, I can’t think of a good way off the top of my head.
Well, go with your gut either way, since it’s probably right.
Hmm, are you saying that going with your gut is most often the right choice ? Perhaps your gut is smarter than mine, since I can recall many examples from my own life when trusting my intuitions turned out to be a bad idea. Research likewise shows that human intuition often produces wrong answers to important questions; what we call “critical thinking” today is largely a collection of techniques that help people overcome their intuitive biases. Nowadays, whenever I get a gut feeling about something, I try to make the effort to double-check it in a more systematic fashion, just to make sure (excluding exceptional situations such as “I feel like there might be a tiger in that bush”, of course).
I’m claiming that going with your gut instinct usually produces good results, and when time is limited produces the best results available unless there’s a very simple bias involved and an equally simple correction to fix it.
Sometimes I feel my gut is smarter than my explicit reasoning: sometimes, when I have to make a decision in a very limited time, I make a choice which, five seconds later, I can’t fully make sense of, but on further reflection I realize it was indeed the most reasonable possible choice after all. (There might be some kind of bias I fail to fully correct for, though.)
If you’ll allow me to butt into this conversation, I have to say that on the assumption that consciousness and identity depend not on algorithms executed by the brain (and which could be executed just as well by transistors), but on a certain special identity attached to your body which cannot be transferred to another—granting that premise—it seems perfectly rational to not want to change hardware. But when you say:
Plus it’s good practice, since our justice system won’t decide personhood by asking God...
do you mean that you would like the justice system to decide personhood by asking God?
Our justice system should put in safeguards against what happens if we accidentally appoint ungodly people. That’s the intuition behind deontological morality (some people will cheat or not understand, so we have bureaucracy instead) and it’s the idea behind most laws. The reasoning here is that judges are human. This would of course be different in a theocracy ruled by Jesus, which some Christians (I’m literally so tired right now I can’t remember if this is true or just something some believe, or where it comes from) believe will happen for a thousand years between the tribulation and the end of the world.
What do you have in mind when you say “godly people”?
The qualifications I want for judges are honest, intelligent, benevolent, commonsensical, and conscientious. (Knowing the law is implied by the other qualities since an intelligent, benevolent, conscientious person wouldn’t take a job as a judge without knowing the law.)
Godly isn’t on the list because I wouldn’t trust judges who were chosen for godliness to be fair to non-godly people.
Godly isn’t on the list because I wouldn’t trust judges who were chosen for godliness to be fair to non-godly people.
Then you’re using a different definition of “godly” from the one I use.
The qualifications I want for judges are honest, intelligent, benevolent, commonsensical, and conscientious.
Part but not all of my definition of “godly”. (Actually, intelligent and commonsensical aren’t part of it. So maybe judges should be godly, intelligent and commonsensical.)
Our justice system should put in safeguards against what happens if we accidentally appoint ungodly people.
Currently, we still have some safeguards in place that ensure that we don’t accidentally appoint godly people. Our First Amendment, for example, is one of such safeguards, and I believe it to be a very good thing.
The problem with using religion as a basis for public policy is that there’s no way to know (or even estimate), objectively, which religion is right. For example, would you be comfortable if our country officially adopted Sharia law, put Muslim clerics in all the key government positions, and mandated that Islam be taught in schools (*) ? Most Christians would answer “no”, but why not ? Is it because Christianity is the one true religion, whereas Islam is not ? But Muslims say the exact same thing, only in reverse; and so does every other major religion, and there’s no way to know whether any of them are right (other than after death, I suppose, which isn’t very useful). Meanwhile, there are atheists such as myself who believe that the very idea of religion is deeply flawed; where do we fit into this proposed theocracy ?
This is why I believe that decoupling religion from government was an excellent move. If the government is entirely secular, then every person is free to worship the god or gods they believe in, and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
(*) I realize that the chances of this actually happening are pretty much nonexistent, but it’s still a useful hypothetical example.
If the government is entirely secular, then every person is free to worship the god or gods they believe in, and no person has the right to impose their faith onto others.
I don’t think that one can say a government is entirely secular, nor can it reasonably be an ideal endlessly striven for. A political apparatus would have to determine what is and isn’t permissible, and any line drawn would be arbitrary.
Suppose a law is passed by a coalition of theist and environmentalist politicians banning eating whales, where the theists think it is wrong for people (in that country) to eat whales as a matter of religious law. A court deciding whether or not the law was impermissibly religiously motivated not only has to try to divine the motives of those involved in passing the law, it would have to decide what probability the law would have had of passing, what to counterfactually replace the theists’ values with, etc., and then compare that to some standard.
Currently, we still have some safeguards in place that ensure that we don’t accidentally appoint godly people. Our First Amendment, for example, is one of such safeguards, and I believe it to be a very good thing.
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Which part of this is intended to prevent the appointment of godly judges? The guarantee that we won’t go killing people for heresy? Or the guarantee that you have freedom of speech and the freedom to tell the government you’d like it to do a better job on something?
Unless by “godly” you mean “fanatical extremists who approve of terrorism and/or fail to understand why theocracies only work in theory and not in practice”. In which case I agree, but that wasn’t my definition of that word.
For example, would you be comfortable if our country officially adopted Sharia law, put Muslim clerics in all the key government positions, and mandated that Islam be taught in schools (*) ?
No. You predict correctly.
Most Christians would answer “no”, but why not ? Is it because Christianity is the one true religion, whereas Islam is not ?
Yes. And because I expect Sharia law to directly impinge on the freedoms that I rightly enjoy in secular society and would also enjoy if godly and sensible people (here meaning moral Christians who have a basic grasp of history, human nature, politics and rationality) were running things. And because I disapprove of female circumcision and the death penalty for gays. And because I think all the clothing I’d have to wear would be uncomfortable, I don’t like gloves, black is nice but summer in California calls for something other than head-to-toe covering in all black, I prefer to dress practically and I have a male friend I’d like to not be separated from.
Some of the general nature of these issues showed up in medieval Europe. That’s because they’re humans-with-authority issues, not just issues with Islam. (At least, not with Islam alone.)
But Muslims say the exact same thing, only in reverse; and so does every other major religion,
Yes, but they’re wrong.
and there’s no way to know whether any of them are right (other than after death, I suppose, which isn’t very useful)
We can test what they claim is true. For instance, Jehovah’s Witnesses think it’ll be only a very short time until the end of the world, too short for political involvement to be useful (I think). So if we wait and the world doesn’t end and we ascertain that had more or fewer people been involved in whatever ways we could have had outcomes that would have been better or worse, we can disprove a tenet of that sect.
Meanwhile, there are atheists such as myself who believe that the very idea of religion is deeply flawed; where do we fit into this proposed theocracy ?
The one with the Muslims? Probably as corpses. Are you under the impression that I’ve suggested a Christian theocracy instead?
This is why I believe that decoupling religion from government was an excellent move.
Concur. I don’t want our country hobbled by Baptists and Catholics arguing with each other.
If the government is entirely secular, then every person is free to worship the god or gods they believe in,
Of course, the government could mandate atheism, or allow people to identify as whatever while prohibiting them from doing everything their religion calls for (distributing Gideon Bibles at schools, wearing a hijab in public, whatever). Social pressure is also a factor, one which made for an oppressive, theocraticish early America even though we had the First Amendment.
and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
When it works, it really works. You’ll find no disagreement from anyone with a modicum of sense.
Unless by “godly” you mean “fanatical extremists who approve of terrorism and/or fail to understand why theocracies only work in theory and not in practice”.
Understood. When most Christians say things like, “I wish our elected official were more godly”, they usually mean, “I really wish we lived in a Christian theocracy”, but I see now that you’re not one of these people. In this case, would you vote for an atheist and thus against a Christian, if you thought that the atheist candidate’s policies were more beneficial to society than his Christian rival’s ?
Yes, but they’re wrong.
Funny, that’s what they say about you...
We can test what they claim is true.
This is an excellent idea, but it’s not always practical; otherwise, most people would be following the same religion by now. For example, you mentioned that you don’t want to wear uncomfortable clothing or be separated from your male friend (to use some of the milder examples). Some Muslims, however (as well as some Christians), believe that doing these things is not merely a bad idea, but a mortal sin, a direct affront to their god (who, according to them, is the one true god), which condemns the sinner to a fiery hell after death. How would you test whether this claim was true or not ?
Of course, the government could mandate atheism
Even though I’m an atheist, I believe this would be a terrible idea.
When it works, it really works. You’ll find no disagreement from anyone with a modicum of sense.
Well, this all depends on what you believe in. For example, some theists believe (or at least claim to believe) that certain actions—such as wearing the wrong kind of clothes, or marrying the wrong kinds of people, etc. -- are mortal sins that provoke God’s wrath. And when God’s wrath is made manifest, it affects the entire nation, not just the individual sinners (there are plenty of Bible verses that seem to be saying the same thing).
If this belief is true, then stopping people from wearing sinful clothing or marrying in a sinful way or whatever is not merely a sensible thing to do, but pretty much a moral imperative. This is why (as far as I understand) some Christians are trying to turn our government into a Christian theocracy: they genuinely believe that it is their moral duty to do so. Since their beliefs are ultimately based on faith, they are not open to persuasion; and this is why I personally love the idea of a secular government.
In this case, would you vote for an atheist and thus against a Christian, if you thought that the atheist candidate’s policies were more beneficial to society than his Christian rival’s ?
Possibly. Depends on how much better, how I expected both candidates’ policies to change and how electable I considered them both.
For example, you mentioned that you don’t want to wear uncomfortable clothing or be separated from your male friend (to use some of the milder examples). Some Muslims, however (as well as some Christians), believe that doing these things is not merely a bad idea, but a mortal sin, a direct affront to their god (who, according to them, is the one true god), which condemns the sinner to a fiery hell after death. How would you test whether this claim was true or not ?
I wouldn’t. But I would test accompanying claims. For this particular example, I can’t rule out the possibility of ending up getting sent to hell for this until I die. However, having heard what supporters of those policies say, I know that most Muslims who support this sort of idea of modest clothing claim that it causes women to be more respected, causes men exposed only to this kind of woman to be less lustful and some even claim it lowers the prevalence of rape. As I receive an optimal level of respect at the moment, I find the first claim implausible. Men in countries where it happens are more sexually frustrated and more likely to end up blowing themselves up. Countries imposing these sorts of standards harm women even more than they harm men. So that’s implausible. And rape occurs less in cultures with more unsexualized nudity, which would indicate only a modest protective effect or none at all, or could even indicate that more covering up causes more rape.
It’s not 100% out of the question that the universe has an evil god who orders people to do stupid things for his own amusement.
Funny, that’s what they say about you...
I say you’re wrong about atheism, but you don’t consider that strong evidence in favor of Christianity.
Possibly. Depends on how much better, how I expected both candidates’ policies to change and how electable I considered them both.
That’s perfectly reasonable, but see my comments below.
For this particular example, I can’t rule out the possibility of ending up getting sent to hell for this until I die. However, having heard what supporters of those policies say, I know that most Muslims who support this sort of idea of modest clothing claim that it causes women to be more respected...
Ok, so you’ve listed a bunch of empirically verifiable criteria, and evaluated them. This approach makes sense to me… but… it sounds to me like you’re making your political (“atheist politician vs. Christian politician”) and moral (“should I wear a burqa”) choices based primarily (or perhaps even entirely) on secular reasoning. You would support the politician who will implement the best policies (and who stands a chance of being elected at all), regardless of his religion; and you would oppose social polices that demonstrably make people unhappy—in this life, not the next. So, where does “godliness” come in ?
It’s not 100% out of the question that the universe has an evil god who orders people to do stupid things for his own amusement.
I agree, but then, I don’t have faith to inform me of any competing gods’ existence. I imagine that if I had faith in a non-evil Christian god, who is also the only god, I’d peg the probability of the evil god’s existence at exactly 0%. But it’s possible that I’m misunderstanding what faith feels like “from the inside”.
I’m under the impression that you’ve just endorsed a legal system which safeguards against the consequences of appointing judges who don’t agree with Christianity’s model of right and wrong, but which doesn’t safeguard against the consequences of appointing judges who don’t agree with other religions’ models of right and wrong.
Am I mistaken?
If you are endorsing that, then yes, I think you’ve endorsed a violation of the Establishment Clause of the First Amendment as generally interpreted.
Regardless, I absolutely do endorse testing the claims of various religions (and non-religions), and only acting on the basis of a claim insofar as we have demonstrable evidence for that claim.
But Muslims say the exact same thing, only in reverse; and so does every other major religion,
Yes, but they’re wrong.
and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
When it works, it really works. You’ll find no disagreement from anyone with a modicum of sense.
These two quotes are an interesting contrast to me. I think the Enlightenment concept of tolerance is an essential principle of just government. But you believe that there is a right answer on the religion question. Why does tolerance make any sense to you?
Just to be clear, abandoning tolerance does not logically imply bringing back the Inquisition (or its Protestant equivalent),
How not? Hasn’t it basically always resulted in either cruelty or separatism? The former is harmful to others, the latter dangerous to those who practice it. Are we defining tolerance differently? Tolerance makes sense to me for the same reason that if someone came up to me and said that the moon was made of green cheese because Omega said so, and then I ended up running into a whole bunch of people who said so and rarely listened to sense, I would not favor laws facilitating killing them. And if they said that it would be morally wrong for them to say otherwise, I would not favor causing them distress by forcing them to say things they think are wrong. Even though it makes no sense, I would avoid antagonizing them because I generally believe in not harming or antagonizing people.
But you believe that there is a right answer on the religion question.
Don’t you? If you’re an atheist, don’t you believe that’s the right answer?
It seems logically possible to me that government could favor a particular sect without necessarily engaging in immoral acts. For the favored sect, the government could pay the salary of pastors and the construction costs of churches. Education standards (even for home-schooled children) could include knowledge of particular theological positions of the sect. Membership could be a plus-factor in applying for government licenses or government employment.
As you note, human history strongly suggests government favoritism wouldn’t stop there and would proceed to immoral acts. But it is conceivable, right? (And if we could edit out in-group bias, I think that government favoritism is the rational response to the existence of an objectively true moral proposition).
And you are correct that I used imprecise language about knowing the right answer on religion.
It is conceivable. I consider it unlikely. It would probably be the beginning of a slippery slope, so I reject it on the grounds that it will lead to bad things.
Plus I wouldn’t know which sect it should be, but we can rule out Catholicism, which will really make them angry, and all unfavored sects will grumble. (Some Baptists believe all Catholics are a prophesied evil. Try compromising between THEM.) And, you know, this very idea is what prompted one of the two genocides that brought part of my family to the New World.
And the government could ask favors of the sect in return for these favors, corrupting its theology.
… a theocracy ruled by Jesus, which some Christians (I’m literally so tired right now I can’t remember if this is true or just something some believe, or where it comes from) believe will happen for a thousand years between the tribulation and the end of the world.
I’m literally so tired right now I can’t remember if this is true or just something some believe, or where it comes from
You are probably thinking of premillennialism, which is a fairly common belief among Protestant denominations (particularly evangelical ones), but not a universal one. Catholic and Orthodox churches both reject it. As best I can tell it’s fundamentally a Christian descendant of the Jewish messianic teachings, which are pretty weakly supported textually but tend to imply a messiah as temporal ruler; since Christianity already has its messiah, this in turn implies a second coming well before the final judgment and the destruction of the world. Eschatology in general tends to be pretty varied and speculative as theology goes, though.
On the flip side, your (and mine, and everyone else’s) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won’t necessarily be a step down.
I entirely agree with you that various forms of mistaken and fraudulent identity, where entities falsely claim to be me or are falsely believed to be me, are problematic. Indeed, there are versions of that happening right now in the real world, and they are a problem. (That last part doesn’t have much to do with AI, of course.)
I agree that people being modified without their consent is problematic. That said, it’s not clear to me that I would necessarily be more subject to being modified without my consent as a computer than I am as whatever I am now—I mean, there’s already a near-infinite assortment of things that can modify me without my consent, and there do exist techniques for making accidental/malicious modification of computers difficult, or at least reversible. (I would really have appreciated error-correction algorithms after my stroke, for example, or at least the ability to restore my mind from backup afterwards. So the idea that the kind of thing I am right now is the ne plus ultra of unmodifiability rings false for me.)
Approving of something in principle doesn’t necessarily translate into believing it should be mandatory regardless of the subject’s feelings on the matter, or even into advocating it in any particular case. I’d be surprised if EY in particular ever made such an argument, given the attitude toward self-determination expressed in his Metaethics and Fun Theory sequences; I am admittedly extrapolating from only tangentially related data, though. Not sure I’ve ever read anything of his dealing with the ethics of brain simulation, aside from the specific and rather unusual case given in Nonperson Predicates and related articles.
Robin Hanson’s stance is a little different; his emverse is well-known, but as best I can tell he’s founding it on grounds of economic determinism rather than ethics. I’m hardly an expert on the subject, nor an unbiased observer (from what I’ve read I think he’s privileging the hypothesis, among other things), but everything of his that I’ve read on the subject parses much better as a Cold Equations sort of deal than as an ethical imperative.
I’m sure you’re pro self-determination, right? Or are you? One of the things that pushed me away from religion in the beginning was that there was no space for self-determination (not that there is much from a natural perspective); the idea of being owned is not a nice one to me. Some of us don’t want to watch ourselves rot in a very short space of time.
Um, according to the Bible, the Abrahamic God’s supposed to have done some pretty awful things to people on purpose, or directed humans to do such things. It’s hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn’t act as expected; exhorting people to go wipe out other cultures, legislating victim blame into ethics around rape, sending actual fragging bears to mutilate and kill irreverent children?
I’m not the sort of person who assumes Christians are inherently bad people, but it’s a serious point of discomfort for me that some nontrivial portion of humanity believes that a being answering to that description and those actions a) exists and b) is any kind of moral authority.
If a human did that stuff, they’d be described as a whimsical tyrant at the most charitable. Why’s God supposed to be different?
While I agree with some of your other points, I’m not sure about this:
It’s hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn’t act as expected
We shouldn’t be too harsh until we are faced with either deleting a potentially self-improving AI that is not provably friendly or risking the destruction of not just our species but the destruction of all that we value in the universe.
I don’t understand the analogy. I see how deleting a superhuman AI with untold potential is a lot like killing many humans, but isn’t it a point of God’s omnipotence that humans can never even theoretically present a threat to Him or His creation (a threat that he doesn’t approve of, anyway)?
Within the fictional universe of the Old and New Testaments, it seems clear that God has certain preferences about the state of the world, and that for some unspecified reason God does not directly impose those preferences on the world. Instead, God created humans and gave them certain instructions which presumably reflect or are otherwise associated with God’s preferences, then let them go do what they would do, even when their doing so destroys things God values. And then every once in a while, God interferes with their doing those things, for reasons that are unclear.
None of that presupposes omnipotence in the sense that you mean it here, although admittedly many fans of the books have posited the notion that God possesses such omnipotence.
That said, I agree that the analogy is poor. Then again, all analogies will be poor. A superhumanly powerful entity doing and refraining from doing various things for undeclared and seemingly pointless and arbitrary motives is difficult to map to much of anything.
Yeah, I kind of realize that the problems of omnipotence, making rocks that one can’t lift and all that, only really became part of the religious discourse in a more mature and reflection-prone culture, the ways of which would already have felt alien to the OT’s authors.
Taking the Old Testament God as he is in the book of Genesis, this isn’t clear at all. At least when talking about the long-term threat potential of humans.
Then the LORD God said, “Behold, the man has become like one of Us, knowing good and evil; and now, he might stretch out his hand, and take also from the tree of life, and eat, and live forever “--
or
And they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth.
And the Lord came down to see the city and the tower, which the children of men builded.
And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.
Go to, let us go down, and there confound their language, that they may not understand one another’s speech.
The whole idea of what exactly God is varied during the long centuries in which the stories were written.
Hello. I expect you won’t like me because I’m Christian and female and don’t want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should. I’ve been lurking for a long time. The first time I found this place I followed a link to OvercomingBias from AnneC’s blog and from there, without quite realizing it, found myself archive-binging and following another link here. But then I stopped and left and then later I got linked to the Sequences from Harry Potter and the Methods of Rationality.
A combination of the whole evaporative cooling thing and looking at an old post that wondered why there weren’t more women convinced me to join. You guys are attracting a really narrow demographic and I was starting to wonder whether you were just going to turn into a cult and I should ignore you.
...And I figure I can still leave if that ends up happening, but if everyone followed the logic I just espoused, it’ll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world. I’d rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don’t agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
Okay, ready to be shouted down. I’ll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I’ll probably just leave soon anyway. Nothing good can come of this. I don’t know why I’m doing this. I shouldn’t be here; you don’t want me here, not to mention I probably shouldn’t bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
Wow. Some of your other posts are intelligent, but this is pure troll-bait.
EDIT: I suppose I should share my reasoning. Copied from my other post lower down the thread:
Classic troll opening. Challenges us to take the post seriously. Our collective ‘manhood’ is threatened if react normally (eg saying “trolls fuck off”).
Insulting straw man with a side of “you are an irrational cult”.
“Seriously, I’m one of you guys”. Concern troll disclaimer. Classic.
Again undertones of “you are a cult and you must accept my medicine or turn into a cult”. Again we are challenged to take it seriously.
I didn’t quite understand this part, but again, straw man caricature.
Theres a rhetorical meme on 4chan that elegantly deals with this kind of crap:
’nuff said
classic reddit downvote preventer:
Post a troll or other worthless opinion
Imply that the hivemind won’t like it
Appeal to people’s fear of hivemind
Collect upvotes.
again implying irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is “no no, we don’t hate you and we certainly won’t censor you; please we want more christian trolls like you”. EDIT: Ha! well predicted I say. I just looked at the other 500 responses. /EDIT
And top it off with a bit of sympathetic character, damsel-in-distress crap. EDIT: Oh and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn’t have to be.
Trolling is a art. and Aspiring_Knitter is a artist. 10⁄10.
You’ve got an interesting angle there, but I don’t think AspiringKnitter is a troll in the pernicious sense—her post has led to a long reasonable discussion that she’s made a significant contribution to.
I do think she wanted attention, and her post had more than a few hooks to get it. However, I don’t think it’s useful to describe trolls as “just wanting attention”. People post because they want attention. The important thing is whether they repay attention with anything valuable.
I don’t have the timeline completely straight, but it looks to me like AspiringKnitter came in trolling and quickly changed gears to semi-intelligent discussion. Such things happen. AspiringKnitter is no longer a troll, that’s for sure; like you say “her post has led to a long reasonable discussion that she’s made a significant contribution to”.
All that, however, does not change the fact that this particular post looks, walks, and quacks like troll-bait and should be treated as such. I try to stay out of the habit of judging posts on the quality of the poster’s other stuff.
I don’t know if this is worth saying, but you look a lot more like a troll to me than she does, though of a more subtle variety than I’m used to.
You seem to be taking behavior which has been shown to be in the harmless-to-useful range and picking a fight about it.
Thanks for letting me know. If most people disagree with my assessment, I’ll adjust my troll-resistance threshold.
I just want to make sure we don’t end up tolerating people who appear to have trollish intent. AspiringKnitter turned out to be positive, but I still think that particular post needed to be called out.
Well Kept Gardens Die By Pacifism.
You’re welcome. This makes me glad I didn’t come out swinging—I’d suspected (actually I had to resist the temptation to obsess about the idea) that you were a troll yourself.
If you don’t mind writing about it, what sort of places have you been hanging out that you got your troll sensitivity calibrated so high? I’m phrasing it as “what sort of places” in case you’d rather not name particular websites.
4chan, where there is an interesting dynamic around trolling and getting trolled. Getting trolled is low-status, calling out trolls that no one else caught is high-status, and trolling itself is god-status; calling troll incorrectly is low-status, like getting trolled. With that culture, the art of trolling, counter-trolling and troll detection gets well trained.
I learned a lot of trolling theory from reddit (like the downvote preventer and concern trolling). The politics, anarchist, feminist and religious subreddits have a lot of good cases to study (they generally suck at managing community, though).
I learned a lot of relevant philosophy of trolling and some more theory from /i/nsurgency boards and wikis (start at partyvan.info). Those communities are in a sorry state these days.
A lot of what I learned on 4chan and /i/ is not common knowledge around here and could be potentially useful. Maybe I’ll beat some of it into a useful form and post it.
For one thing, the label “trolling” seems like it distracts more than it adds, just like “dark arts.” AspiringKnitter’s first post was loaded with influence techniques, as you point out, but it’s not clear to me that pointing at influence techniques and saying “influence bad!” is valuable, especially in an introduction thread. I mean, what’s the point of understanding human interaction if you use that understanding to botch your interactions?
There is a clear benefit to pointing out when a mass of other people are falling for influence techniques in a way you consider undesirable.
It is certainly worth pointing out the techniques, especially since it looks like not everyone noticed them. What’s not clear to me is the desirability of labeling it as “bad,” which is how charges of trolling are typically interpreted.
I see your point, but that post wasn’t using dark arts to persuade anyone of anything; it looked very much like the purpose was controversy. Hence trolling.
Son, I am disappoint.
Are you implying there was persuasion going on? Or that I used “dark arts” when I shouldn’t?
Easiest first: I introduced “dark arts” as an example of a label that distracted more than it added. It wasn’t meant as a reference to or description of your posts.
In your previous comment, you asked the wrong question (‘were they attempting to persuade?’) and then managed to come up with the wrong answer (‘nope’). Both of those were disappointing (the first more so), especially in light of your desire to spread your experience.
The persuasion was “please respond to me nicely.” It was richly rewarded: 20 welcoming responses (when most newbies get 0 or 1), and the first unwelcoming response got downvoted quickly.
The right question is, what are our values here? When someone expressing a desire to be welcomed uses influence techniques that further that end, should we flip the table over in disgust that they tried to influence us? That’ll show them that we’re savvy customers who can’t be trolled! Or should we welcome them because we want the community to grow? That’ll show them that we’re worth sticking around for.
I will note that I upvoted this post, because in the version that I saw it started off with “Some of your other posts are intelligent” and then showed many of the tricks AspiringKnitter’s post used. Where I disagree with you is the implication that we should have rebuked her for trolling. The potential upsides of treating someone with charity and warmth are far greater than the potential downsides of humoring a troll for a few posts.
Ok. That makes sense.
Was parent downvoted for asking questions or for improper capitalization?
Was I downvoted for asking about downvotes or false dilemma?
Was I downvoted for meta-humor or carrying the joke too far?
That’s interesting—I’ve never hung out anywhere that trolling was high status.
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
I think I understand concern trolling, which I take to be giving advice that actually weakens the receiver’s position, though I think the coinage “hlep” from Making Light is more widely useful: inappropriate, annoying or infuriating advice that is intended to be helpful but doesn’t have enough thought behind it. But what’s a downvote preventer?
Hlep has a lot of overlap with other-optimizing.
I’d be interested in what you have to say about the interactions at 4chan and /i/, especially about breakdowns in political communities.
I’ve been mulling the question of how you identify and maintain good will—to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn’t start out being all that angry at each other.
On reddit there is just upvotes and downvotes. Reddit doesn’t have developed social mechanisms for dealing with trolls, because the downvotes work most of the time. Developing troll technology like the concern troll and the downvote preventer to hack the hivemind/vote dynamic is the only way to succeed.
4chan doesn’t have any social mechanisms either, just the culture. Communication is unnecessary for social/cultural pressure to work, interestingly. Once the countertroll/troll/troll-detector/trolled/troll-crier hierarchy is formed by the memes and mythology, the rest just works in your own mind: “fuck, I got trolled, better watch out next time”, “all these people are getting trolled, but I know the OP is a troll; I’m better than them”, “successful troll is successful”, “I trolled the troll”. Even if you don’t post them and no one reacts to them, those thoughts activate the social shame/status/etc. machinery.
Not quite. A concern troll is someone who comes in saying “I’m a member of your group, but I’m unsure about this particular point in a highly controversial way” with the intention of starting a big useless flame-war.
Haven’t heard of hlep. Seems interesting.
The downvote preventer is when you say “I know the hivemind will downvote me for this, but...”. It creates an association in the reader’s mind between downvoting and being a hivemind drone, which people are afraid of, so they don’t downvote. It’s one of the techniques trolls use to protect the payload, like the way the concern troll used community membership.
Yes. A big part of trolling is actually creating and fueling those disagreements. COINTELPRO trolling is disrupting people’s ability to identify trolls and goodwill. There is a lot of depth and difficulty to that.
Wow, I don’t post over Christmas and look what happens. Easiest one to answer first.
Wow, thanks!
You’re a little mean.
You don’t need an explanation of 2, but let me go through your post and explain about 1.
Huh. I guess I could have come up with that explanation if I’d thought. The truth here is that I was just thinking “you know, they really won’t like me, this is stupid, but if I make them go into this interaction with their eyes wide open about what I am, and phrase it like so, I might get people to be nice and listen”.
That was quite sincere and I still feel that that’s a worry.
Also, I don’t think I know more about friendliness than EY. I think he’s very knowledgeable. I worry that he has the wrong values so his utopia would not be fun for me.
Wow, you’re impressive. (Actually, from later posts, I know where you get this stuff from. I guess anyone could hang around 4chan long enough to know stuff like that if they had nerves of steel.) I had the intuition that this would lead to fewer downvotes (but note that I didn’t lie; I did believe it was true, from many theist-unfriendly posts on this site), but I didn’t consciously think “this procedure will appeal to people’s fear of the hivemind to shame them into upvoting me”. I want to thank you for pointing that out. Knowing how and why that intuition was correct will allow me to decide with eyes wide open whether to do something like that in the future, and if I ever actually want to troll, I’ll be better at it.
Actually, I just really need to learn to remember that while I’m posting, proper procedure is not “allow internal monologue to continue as normal and transcribe it”. You have no idea how much trouble that’s gotten me into. (Go ahead and judge me for my self-pitying internal monologue if you want. Rereading it, I’m wondering how I failed to notice that I should just delete that part, or possibly the whole post.) On the other hand, I’d certainly hope that being honest makes me a sympathetic character. I’d like to be sympathetic, after all. ;)
Thank you. It wasn’t, but as you say, it doesn’t have to be. I hope I’ll be more mindful in the future, and bear morality in mind in crafting my posts here and elsewhere. I would never have seen these things so clearly for myself.
Thanks, but no. LOL.
I’d upvote you for the information, but the rest of your post is just so rude that I don’t think I will.
Note that declaring Crocker’s rules and subsequently complaining about rudeness sends very confusing signals about how you wish to be engaged with.
Thank you. I was complaining about his use of needless profanity to refer to what I said, and a general “I’m better than you” tone (understandable, if he comes from a place where catching trolls is high status, but still rude). I not only approve of being told that I’ve done something wrong, I actually thanked him for it. Crocker’s rules don’t say “explain things in an insulting way”, they say “don’t soften the truths you speak to me”. You can optimize for information—and even get it across better—when you’re not trying to be rude. For instance,
That would not convey less truth if it weren’t vulgar. You can easily communicate that someone is tugging people’s heartstrings by presenting as a highly sympathetic damsel in distress without being vulgar.
Also, stuff like this:
That makes it quite clear that nyan_sandwich is getting a high from this and feels high-status because of behavior like this. While that in itself is fine, the whole post does have the feel of gloating to it. I simultaneously want to upvote it for information and downvote it for lowering the overall level of civility.
Here’s my attempt to clarify how I wish to be engaged with: convey whatever information you feel is true. Be as reluctant to actively insult me as you would anyone else, bearing in mind that a simple “this is incorrect” is not insulting to me, and nor is “you’re being manipulative”. “This is crap” always lowers the standard of debate. If you spell out what’s crappy about it, your readers (including yours truly) can grasp for themselves that it’s crap.
Of course, if nyan_sandwich just came from 4chan, we can congratulate him on being an infinitely better human being than everyone else he hangs out with, as well as on saying something that isn’t 100% insulting, vulgar nonsense. (I’d say less than 5% insulting, vulgar nonsense.) Actually, his usual contexts considered, I may upvote him after all. I know what it takes to be more polite than you’re used to others being.
That doesn’t sound right. Here’s a quote from Crocker’s rules:
Another quote:
Quote from our wiki:
There’s a decision theoretic angle here. If I declare Crocker’s rules, and person X calls me a filthy anteater, then I might not care about getting valuable information from them (they probably don’t have any to share) but I refrain from lashing out anyway! Because I care about the signal I send to person Y who is still deciding whether to engage with me, who might have a sensitive detector of Crocker’s rules violations. And such thoughtful folks may offer the most valuable critique. I’m afraid you might have shot yourself in the foot here.
I think this is generally correct. I do wonder about a few points:
If I am operating on Crocker’s Rules (I personally am not, mind, but hypothetically), and someone’s attempt to convey information to me has obvious room for improvement, is it ever permissible for me to let them know this? Given your decision theory point, my guess would be “yes, politely and privately,” but I’m curious as to what others think as well. As a side note, I presume that if the other person is also operating by Crocker’s Rules, you can say whatever you like back.
Do you mean improvement of the information content or the tone? If the former, I think saying “your comment was not informative enough, please explain more” is okay, both publicly and privately. If the latter, I think saying “your comment was not polite enough” is not okay under the spirit of Crocker’s rules, neither publicly nor privately, even if the other person has declared Crocker’s rules too.
When these things are orthogonal, I think your interpretation is clear, and when information would be obscured by politeness the information should win—that’s the point of Crocker’s Rules. What about when information is obscured by deliberate impoliteness? Does the prohibition on criticizing impoliteness win, or the permit for criticizing lack of clarity? In any case, if the other person is not themselves operating by Crocker’s Rules, it is of course important that your response be polite, whatever it is.
Basically, no. If you want to criticize people for being rude to you just don’t operate by Crocker’s rules. Make up different ones.
Question: do Crocker’s rules work differently here than I’m used to? I’m used to a communication style where people say things to get the point across, even though such things would be considered rude in typical society, not for being insulting but for pointless reasons, and we didn’t do pointless things just to be typical. We were bluntly honest with each other, even (actually especially) when people were wrong (after all, it was kind of important that we convey that information accurately, completely and as quickly as possible in some cases), but to be deliberately insulting when information could have been just as easily conveyed some other way (as opposed to when it couldn’t be), or to be insulting without adding any useful information at all, was quite gauche. At one point someone mentioned that if we wanted to invoke that in normal society, we should say we were under Crocker’s rules.
So it looks like the possibilities worth considering are:
Someone LIED just to make it harder for us to fit in with normal society!
Someone was just wrong.
You’re wrong.
Crockering means different things to different people.
Which do you think it is?
cousin_it’s comment doesn’t leave much room for doubt.
Baiting and switching by declaring Crocker’s rules then shaming and condescending when they do not meet your standard of politeness could legitimately be considered a manipulative social ploy.
I didn’t consider Crocker’s rules at all when reading nyan’s comment and it still didn’t seem at all inappropriate. You being outraged at the ‘vulgarity’ of the phrase “damsel in distress crap” is a problem with your excess sensitivity and not with the phrase. As far as I’m concerned “damsel in distress crap” is positively gentle. I would have used “martyrdom bullshit” (but then I also use bullshit as a technical term).
Crocker’s rules are about how people speak to you. But for all that it is a reply about your comment, nyan wasn’t even talking to you. He was talking to the lesswrong readers, warning them about perceived traps they were falling into when engaging with your comment.
Like it or not, people tend to reciprocate disrespect with disrespect. While you kept your comment superficially civil and didn’t use the word ‘crap’, you did essentially call everyone here a bunch of sexist, Christian-hating bullies. Why would you expect people to be nice to you when you treat them like that?
The impression I have is that calling Crocker’s rules means never acting offended or angry at the way people talk to you, with the expectation that you’ll get more information if people don’t censor themselves out of politeness.
Some of your reactions here are not those I expect from someone under Crocker’s rules (who would just ignore anything insulting or offensive).
So maybe what you consider as “Crocker’s rules” is what most people here would consider “normal” discussion, so when you call Crocker’s rules, people are extra rude.
I would suggest just dropping the reference to Crocker’s rules. I don’t think they’re necessary for having a reasonable discussion, and they put pressure on the people you’re talking to to either call Crocker’s rules too (giving you carte blanche to be rude to them) or look uptight or something.
Possible. I’m inexperienced in talking with neurotypicals. All I know is what was drilled into me by them, which is basically a bunch of things of the form “don’t ever convey this piece of information because it’s rude” (where the piece of information is like… you have hairy arms, you’re wrong, I don’t like this food, I don’t enjoy spending time with you, this gift was not optimized for making me happy—and the really awful, horrible dark side where they feel pressured never to say certain things to me, like that I’m wrong, they’re annoyed by something I’m doing, I’m ugly, I sound stupid, my writing needs improvement—it’s horrible to deal with people who never say those things because I can never assume sincerity; I just have to assume they’re lying all the time) that upon meeting other neurodiverse people I immediately proceeded to forget all about. And so did they. And THAT works out well. It’s accepted within that community that “Crocker’s rules” is how the rest of the world will refer to it.
Anyway, if I’m not allowed to hear the truth without having to listen to whatever insults anyone can come up with, then so be it, I really want to hear the truth and I know it will never be given to me otherwise. But there IS supposed to be something between “you are not allowed to say anything to me except that I’m right about everything and the most wonderful special snowflake ever” and “insult me in every way you can think of”, even if the latter is still preferable to the former. (Is this community a place with a middle ground? If so, I didn’t think such existed. If so, I’ll gladly go by the normal rules of discussion here.)
My experience of LW is that:
the baseline interaction mode would be considered rude-but-not-insulting by most American subcultures, especially neurotypical ones
the interaction mode invoked by “Crocker’s rules” would be considered insulting by most American subcultures, especially neurotypical ones
there’s considerable heterogeneity in terms of what’s considered unacceptably rude
there’s a tentative consensus that dealing with occasional unacceptable rudeness is preferable to the consequences of disallowing occasional unacceptable rudeness, and
the community pushes back on perceived attempts to enforce politeness far more strongly than it pushes back on perceived rudeness.
Dunno if any of that answers your questions.
I would also say that nobody here has come even remotely close to “insult in every conceivable way” as an operating mode.
YES!
There seem to be a lot of new people introducing themselves on the Welcome thread today/yesterday. I would like to encourage everyone to maybe be just a tad bit more polite, and cognizant of the Principle of Charity, at least for the next week or two, so all our newcomers can acclimate to the culture here.
As someone who has only been on this site for a month or two (also as a NT, socially-skilled, female), I have spoken in the past about my difficulties dealing with the harshness here. I ended up deciding not to fight it, since people seem to like it that way, and that’s ok. But I do think the community needs to be aware that this IS in fact an issue that new (especially NT) people are likely to shy away from, and even leave or just not post because of.
tl;dr- I deal with the “rudeness”, but want people to be aware that it does in fact exist. Those of us who dislike it have just learned to keep our mouths shut and deal with it. There are a lot of new people now, so try to soften it for the next week or two.
(Note: I have not been recently down-voted, flamed, or crushed, so this isn’t just me raging.)
I’m unlikely to change my style of presentation here as a consequence of new people arriving, especially since I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
If my presentation style is offputting to new people who prefer a different style, I agree that’s unfortunate. I’m not sure that my dealing with that by changing my style for their benefit—supposing they even benefit from it—is better.
You are correct, in that I do believe that many of the introductions here are from people who have been lurking a long time, but are following the principle of social proof, and just introducing themselves now that everyone else is.
However, I do think that once they have gone through the motions of setting up an account and publishing their introduction, self-consistency will lead them to continue to be more active on this site; they have just changed their self-image to that of “Member of LW”, after all!
Your other supposition, that they might not benefit from it… I will tell you that I have almost quit LW many times in the past month, and it is only a lack of anything better out there that has kept me here.
My assumption is that you are OK with this, and feel that people who can’t handle the heat should get out of the kitchen anyway, so to speak.
I think that is a valid point, IFF you want to maintain LW as it currently stands. I will admit that my preferences are different in that I hope LW grows and gets more and more participants. I also hope that this growth causes LW to be more “inclusive” and have a higher percentage of females (gender stereotyping here, sorry) and NTs, which will in effect lower the harshness of the site.
So I think our disagreement doesn’t stem from “bad” rationality on either of our parts. It’s just that we have different end-goals.
I am going to share with you a trick that is likely to make staying here (or anywhere else with some benefit) easier...
Prismattic’s guaranteed (or your money back) method for dealing with stupid or obnoxious text on the Internet:
Read the problematic material as though it is being performed by Gonzo’s chickens, to the tune of the William Tell Overture.
When this gets boring, you can alternate with reading it as performed by the Swedish chef, to the tune of Ride of the Valkyries.
Really, everything becomes easier to bear when filtered this way. I wish separating out emotional affect was as easy in tense face-to-face situations.
Can you confirm that you’re actually responding to what I wrote?
If so, can you specify what it is about my presentation style that has encouraged you to almost quit?
I’m sorry, I did not want to imply that you specifically made me want to quit. In all honesty, the lack of visual avatars means I can’t keep LW users straight at all.
But since you seem to be asking about your presentation style, here is me re-writing your previous post in a way that is optimized for a conversation I would enjoy, without feeling discomfort.
Original:
How I WISH LW operated (and realize that 95% of you do not wish this)
I asked about my presentation style because that’s what I wrote about in the first place, and I couldn’t tell whether your response to my comment was actually a response to what I wrote, or some more general response to some more general thing that you decided to treat my comment as a standin for.
I infer from your clarification that it was the latter. I appreciate the clarification.
Your suggested revision of what I said would include several falsehoods, were I to have said it.
I had to fill in some interpretations of what I thought you could have meant. If what I filled in was false, it is just that I do not know your mind as well as you do. If I did, I could fill in things that were true.
Politeness does not necessarily require falsity. Your post lacked the politeness parts, so I had to fill in politeness parts that I thought sounded like reasonable things you might be thinking. Were you trying to be polite, you could fill in politeness parts with things that were actually true for you (and not just my best guesses.)
I agree that politeness does not require falsity.
I infer from your explanation that your version of politeness does require that I reveal more information than I initially revealed. Can you say more about why?
I should hope not. I can conceive of more ways to insult than I can type in a day, depending on how we want to count ‘ways’.
How do I insult thee? Let me count the ways.
I insult thee to the depth and breadth and height
My mind can reach, when feeling out of sight
For the lack of Reason and the craft of Bayes.
Turning and turning in the narrowing spiral
The user cannot resist those memes which are viral;
The waterline is lowered; beliefs begin to cool;
Mere tribalism is loosed, upon Lesswrong’s school,
The grey-matter is killed, and everywhere
The knowledge of one’s ignorance is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.
Heh. I’m not sure why you felt compelled to rhyme there, though; Yeats didn’t.
I must confess, I have never actually heard the words ‘gyre’ and ‘falconer’. I assumed they could be pronounced in such a way that it would sound like a rhyme. In my head, they both were pronounced like ‘hear’. Likewise, I assumed one could pronounce ‘world’ and ‘hold’ in such a way that they could sort-of rhyme. In my head, ‘hold’ was pronounced ‘held’ and ‘world’ was pronounced ‘weld.’
http://www.youtube.com/watch?v=OEunVObSnVM
Apparently, this is not the case. Oops.
Although I must admit I was tempted take it up as a novel challenge just to demonstrate how absurd the hyperbole was.
Returning to this… if you’re still tempted, I’d love to see your take on it. Feel free to use me as a target if that helps your creativity, though I’m highly unlikely to take anything you say in this mode seriously. (That said, using a hypothetical third party would likely be emotionally easier.)
Unrelatedly: were you the person who had the script that sorts and displays all of a user’s comments? I’ve changed computers since being handed that pointer and seem to have misplaced the pointer.
No, that’d be Wei Dai, I think; eg. I recently used http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky to point out that Eliezer has more than one negative comment (contra the cult leader accusation).
Hah! Awesome. Thank you!
You might like this comment.
This should be strongly rejected, if Crocker’s Rules are ever going to do more good than harm. I do not mean that it is not the case given existing norms (I simply do not know one way or the other), but that norms should be established such that this is clearly not the case. Someone who is unable to operate according to Crocker’s Rules attempting to do so does not improve discourse or information flow—no one should be pressured to do so.
I agree with you in the abstract.
The problem is, the more a community is likely to consider X a “good” practice, the more it is likely to think less of those who refuse to do X, whatever X is; so I don’t see a good way of avoiding negative connotations to “unable to operate according to Crocker’s Rules”.
… that is, unless the interaction is not symmetric, so that when one side announces Crocker’s rules, there is no implicit expectation that the other side should do the same (with the associated status threat); for example if on my website I mention Crocker’s rules next to the email form or something.
But in a peer-to-peer community like this, that expectation is always going to be implicit, and I don’t see a good way to make it disappear.
Well, here’s me doing my part: I don’t declare Crocker’s rules, and am unlikely to ever do so. Others can if they wish.
As I’ve mentioned before, I am not operating by Crocker’s rules. I try to be responsible for my emotional state, but realize that I’m not perfect at this, so tell me the truth but there’s no need to be a dick about it. I am not unlikely, in the future, to declare Crocker’s rules with respect to some specific individuals and domains, but doing so globally is unlikely in the foreseeable future.
Here’s my part too: I don’t declare Crocker’s rules and do not commit to paying any heed to whether others have declared Crocker’s rules. I’ll speak to people however I see fit—which will include taking into account the preferences of both the recipient and any onlookers to precisely the degree that seems appropriate or desirable at the time.
I don’t know about getting rid of it entirely, but we can at least help by stressing the importance of the distinction, and choosing to view operation by Crocker’s rules as rare, difficult, unrelated to any particular discussion, and of only minor status boost.
Another approach might be to make all Crocker communication private, and expect polite (enough) discourse publicly.
Wikipedia and Google seem to think Eliezer is the authority on Crocker’s Rules. Quoting Eliezer on sl4 via Wikipedia:
Also, from our wiki:
Looking hard for another source, something called the DoWire Wiki has this unsourced:
So if anyone is using Crocker’s Rules a different way, I think it’s safe to say they’re doing it wrong, but only by definition. Maybe someone should ask Crocker, if they’re concerned.
OK.
FWIW, I agree that nyan-sandwich’s tone was condescending, and that they used vulgar words.
I also think “I suppose they can’t be expected to behave any better, we should praise them for not being completely awful” is about as condescending as anything else that’s been said in this thread.
Yeah, you’re probably right. I didn’t mean for that to come out that way (when I used to spend a lot of time on places with low standards, my standards were lowered, too), but that did end up insulting. I’m sorry, nyan_sandwich.
A lot of intelligent folks have to spend a lot of energy trying not to be rude, and part of the point of Crocker’s Rules is to remove that burden by saying you won’t call them on rudeness.
Not all politeness is inconsistent with communicating truth. I agree that “Does this dress make me look fat” has a true answer and a polite answer. It’s worth investing some attention into figuring out which answer to give. Often, people use questions like that as a trap, as mean-spirited or petty social and emotional manipulation. Crocker’s Rule is best understood as a promise that the speaker is aware of this dynamic and explicitly denies engaging in it.
That doesn’t license being rude. If you are really trying to help someone else come to a better understanding of the world, being polite helps them avoid cognitive biases that would prevent them from thinking logically about your assertions. In short, Crocker’s Rule does not mean “I don’t mind if you are intentionally rude to me.” It means “I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude.”
Right, I wasn’t saying anything that contradicted that. Rather, some of us have additional cognitive burden in general trying to figure out if something is supposed to be rude, and I always understood part of the point of Crocker’s Rules to be removing that burden so we can communicate more efficiently. Especially since many such people are often worth listening to.
For what it’s worth, I generally see some variant of “please don’t flame me” attached only to posts which I’d call inoffensive even without it. I’m not crazy about seeing “please don’t flame me”, but I write it off to nervousness and don’t blame people for using it.
Caveat: I’m pretty sure that “please don’t flame me” won’t work in social justice venues.
Excellent analysis. I just changed my original upvote for that post to a downvote, and I must admit that it got me in exactly every way you explained.
I had missed this. The original post read as really weird and hostile, but I only read after having heard about this thread indirectly for days, mostly about the way how later she seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.
Upvoted
I disagree. It’s an honest expression of feeling, and a reasonable statement of expectations, given LW’s other run-ins with self-identified theists. It may be a bit overstated, but not terribly much.
Do you really think it’s only a bit overstated? I mean, has anybody been banned for being religious? And has anybody here indicated that they hate Christians without immediately being called on falling into blue vs. green thinking?
From her other posts, AspiringKnitter strikes me as being open-minded and quite intelligent, but that last paragraph really irks me. It’s self-debasing in an almost manipulative way—as if she actually wants us to talk to her like we “only want [her] to hate God” or as if we “really hate Christians”. Anybody who has spent any non-trivial amount of time on LW would know that we certainly don’t hate people we disagree with, at least to the best of my knowledge, so asserting that is not a charitable or reasonable expectation. Plus, it seems that it would now be hard(er) to downvote her because she specifically said she expects that, even given a legitimate reason to downvote.
I agree. See my other post deconstructing the troll-techniques used.
Well, some of Eliezer’s posts about religion and religious thought have been more than a little harsh. (I couldn’t find it, but there was a post where he said something along the lines of “I have written about religion as the largest imaginable plague on thinking...”) They didn’t explicitly say that religious people are to be scorned, but it’s very easy to read in that implication, especially since many people who are equally vocal about religion being bad do hold that opinion.
Banned? Not that I know of. But there have certainly been Christians who have been serially downvoted, perhaps more than they deserved.
“Hate” may be too strong a word, but the original poster’s meaning seems to lean closer to “openly intolerant”, which is true and partially justified.
EDIT: Looking back, the original poster was asking if they would be banned, not claiming so. So that doesn’t seem to be a valid criticism.
Being honest and having reasonable expectations of being treated like a troll does not disqualify a post from being a troll.
Classic troll opening. Challenges us to take the post seriously. Our collective ‘manhood’ is threatened if react normally (eg saying “trolls fuck off”).
Insulting straw man with a side of “you are an irrational cult”.
“Seriously, I’m one of you guys”. Concern troll disclaimer. Classic.
Again undertones of “you are a cult and you must accept my medicine or turn into a cult”. Again we are challenged to take it seriously.
I didn’t quite understand this part, but again, straw man caricature.
Theres a rhetorical meme on 4chan that elegantly deals with this kind of crap:
’nuff said
classic reddit downvote preventer:
Post a troll or other worthless opinion
Imply that the hivemind won’t like it
Appeal to people’s fear of hivemind
Collect upvotes.
again implying irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is “no no, we don’t hate you and we certainly won’t censor you; please we want more christian trolls like you”
And top it off with a bit of sympathetic character, damsel-in-distress crap. EDIT: Oh and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn’t have to be.
Trolling is a art. and Aspiring_Knitter is a artist. 10⁄10.
I don’t follow how indicating that she’s actually read the site can be a mark against her. If the comment had not indicated familiarity with the site content, would you then describe it as less trollish?
It’s a classic troll technique. It’s not independent of the other trollish tendencies. Alone, saying those things does not imply troll, but in the presence of other troll-content it is used to raise perceived standing and lower the probability that they are a troll.
EDIT: And yes, trollish opinions without trollish disclaimers raise the probability of plain old stupidity.
EDIT2: Have to be very careful with understanding the causality of evidence supplied by hostile agents. What Evidence Filtered Evidence and so on.
So… voicing disagreement boldly is trolling; voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind; signalling group membership is trolling and going “Seriously, I’m one of you guys”. Joking about the image a group’s ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Okay, so I see the bits that are protection against being called a troll. What I don’t see is the trolling. Is it “I’m a Christian”? If you think all Christians should pretend to be atheists… well, 500 responses disagree with you. Is it what you call straw men? I read those as jokes about what we look like to outsiders, but even if they’re sincere, they’re surrounded with so much display of uncertainty that “No, that’s not what we think.” should end it then and there. And if AspiringKnitter were a troll, why would she stop trolling and write good posts right after that?
Conclusion: You fail the principle of charity forever. You’re a jerk. I hope you run out of milk next time you want to eat cereal.
Deliberate, active straw manning sarcasm for the purpose of giving insult and conveying contempt.
Yes, trolling is distinguished from what nyan called “troll-bait” by, for the most part, duration. Trolls don’t stop picking fights and seem to thrive on the conflict they provoke. If nyan tried to claim that AspiringKnitter was a troll in general—and failed to update on the evidence from after this comment—he would most certainly be wrong.
He wasn’t very charitable in his comment; I certainly would have phrased the criticism differently (and directed most of it at those encouraging the damsel-in-distress crap). But for your part you haven’t failed the principle of charity—you have failed to parse language correctly and respond to the meaning contained therein.
This is not ok.
The cereal thing is comically mild. The impulse to wish bad things on others is a pretty strong one and I think it’s moderated by having an outlet to acknowledge that it’s silly in this or maybe some other way—I’d rather people publicly wish me to run out of milk than privately wish me dead.
Calling nyan a jerk in that context wasn’t ok with me and nor was any joke about wanting harm to come upon him. It was unjustified and inappropriate.
I don’t much care what MixedNuts wants to happen to nyan. The quoted combination of words constitutes a status transaction of a kind I would see discouraged. Particularly given that we don’t allow reciprocal personal banter of the kind this sort of insult demands. If, for example, nyan responded with a pun on a keyword and a reference to Mixed’s sister, we wouldn’t allow it. When insults cannot be returned in kind, the buck stops with the first personal insult. That is, Mixed’s.
This is admirably compelling.
Upvoted.
I am happy that someone other than me gets upset when they see these “jokes” on here.
(I also downvoted the “jerk” comment)
[emphasis mine]. You assume that nyan is male. Where did “he” say that? nyan explicitly claims to be a “genderless internet being” in the introductions thread.
The last LW survey came out with 95% male, IIRC. Being 95% sure of something is quite strong. nyan called Aspiring_Knitter a troll on much less solid evidence. Also, you come from the unfortunate position of not having workable genderless pronouns.
I’ll allow it.
That’s fair. I used male because you sounded more like a male—and still do. If you are a genderless internet being then I will henceforth refer to you as an ‘it’. If you were a genderless human I would use the letter ‘v’ followed by whatever letters seem to fit the context.
I wished nyan_sandwich to stub eir toe, but immediately regretted it as too harsh.
Well, who knows what MixedNuts wishes? Wishing wedrifid runs out of milk doesn’t exclude this latter possibility.
I’m also reminded, of all the silly things, (the overwhelmingly irrational) Simone Weil:
Everyone does, because I said it!
Delicious controversy. Yum. I might have a lulz-relapse and become a troll.
Burn the witch!
Disagreement is not trolling. Neither is nervous disagreement. The hivemind thing had nothing to do with status signaling; it was about the reader’s insecurity. The group membership/cultural knowledge signaling thing is almost always used as a delivery vector for an ignoble payload.
They didn’t look like jokes or uncertainty to me. I am suddenly gripped by a mortal fear that I may not have a sense of humor. The damsel in distress thing was unconnected to the ideas thing.
TL;DR: what wedrifid said.
Again, they still don’t look like jokes. If everyone else decides they were jokes, I will upmod my belief that I am a humorless internet srs-taker. EDIT: Oh, I forgot to address the “AS is not a troll” claim. It has been observed, in the long history of the internet, that sometimes a person skilled in the trolling arts will post a masterfully crafted troll-bait, and then decide to forsake their lulzy crusade for unknown reasons. /EDIT
Joke is on you. nyan_sandwich’s human alter-ego doesn’t eat cereal.
nyan_sandwich may have been stricken with a minor case of confirmation bias when they made that assessment, but I think it still stands.
That’s some interesting reasoning. I’ve met people before who avoided leaving an evaporatively cooling group because they recognized the process and didn’t want to contribute to it, but you might be the first person I’ve encountered who joined a group to counteract it (or to stave it off before it begins, given that LW seems to be both growing and to some extent diversifying right now). Usually people just write groups like that off. Aside from the odd troll or ideologue that claims similar motivations but is really just looking for a fight, at least—but that doesn’t seem to fit what you’ve written here.
Anyway. I’m not going to pretend that you aren’t going to find some hostility towards Abrahamic religion here, nor that you won’t be able to find any arguably problematic (albeit mostly unconsciously so) attitudes regarding sex and/or gender. Act as your conscience dictates should you find either one intolerable. Speaking for myself, though, I take the Common Interest of Many Causes concept seriously: better epistemology is good for everyone, not just for transhumanists of a certain bent. Your belief structure might differ somewhat from the tribal average around here, but the actual goal of this tribe is to make better thinkers, and I don’t think anyone’s going to want to exclude you from that as long as you approach it in good faith.
In fewer words: welcome to Less Wrong.
I don’t think there are any of those around here. Most of us would prefer you didn’t even believe in gods!
Hi, Aspiring Knitter. I also find the Less Wrong culture and demographics quite different from my normal ones (being a female in the social sciences who’s sympathetic to religion though not a believer. Also, as it happens, a knitter.) I stuck around because I find it refreshing to be able to pick apart ideas without getting written off as too brainy or too cold, which tends to happen in the rest of my life.
Sorry for the lack of persecution—you seem to have been hoping for it.
Very glad not to be persecuted, actually. Yay!
Welcome to LessWrong!
Do we? Do you hate Hindus, or do you just think they’re wrong?
One thing I slightly dislike about “internet atheists” is the exclusive focus on religion as a source of all that’s wrong in the world, whereas you get very similar forms of irrationality in partisan politics or nationalism. I’m not alone in holding that view—see this for some related ideas. At best, religion can be about focusing humans’ natural irrationality in areas that don’t matter (cosmology instead of economics), while facilitating morality and cooperative behavior. I understand that some American atheists are more hostile to religion than I am (I’m French; religion isn’t a big issue here, except for Islam), because they have to deal with religious stupidity on a daily basis.
Note that a Mormon wrote a series of posts that was relatively well received, so you may be overestimating LessWrong’s hostility to religion.
Technically, it’s “Christianity” that some of us don’t like very much. Many of us live in countries where people who call themselves “Christians” compose much of the population, and going around hating everyone we see won’t get us very far in life. We might wish that they weren’t Christians, but while we’re dreaming we might as well wish for a pony, too.
And, no, we don’t ban people for saying that they’re Christians. It takes a lot to get banned here.
Well, so far you haven’t given us much of a reason to want you gone. Also, people who call themselves atheists usually don’t really care whether or not you “hate God” any more than we care about whether you “hate Santa Claus”.
Because you feel you have something you want to say?
Do you want a pony?
Can I have a kitty instead?
Amusingly, one of the things I’ve found after becoming a brony is that I mentally edit “wish for a pony” to “wish to be a pony.”
No pony for you
Hi, AspiringKnitter!
There have been several openly religious people on this site, of varying flavours. You don’t (or shouldn’t) get downvoted just for declaring your beliefs; you get downvoted for faulty logic, poor understanding and useless or irrelevant comments. As someone who stopped being religious as a result of reading this site, I’d love for more believers to come along. My impulse is to start debating you right away, but I realise that’d just be rude. If you’re interested, though, drop me a PM, because I’m still considering the possibility I might have made the wrong decision.
The evaporative cooling risk is worrying, now that you mention it… Have you actually noticed that happening here during your lurking days, or are you just pointing out that it’s a risk?
Oh, and dedicating an entire paragraph to musing about the downvotes you’ll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don’t do that.
Uh-oh. LOL.
Normally, I’m open to random debates about everything. I pride myself on it. However, I’m getting a little sick of religious debate after the last few days of participating in it. I suppose I still have to respond to a couple of people below, but I’m starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option. It’s my own fault for showing up here, but I’m starting to realize why “agree to disagree” was ever considered by anyone at all, despite its obvious wrongness: you just can’t do anything if you spend all your time on a never-ending argument.
Haven’t been lurking long enough.
In the future I will not. See below. Thank you for calling me out on that.
Talk of Aumann Agreement notwithstanding, the usual rules of human social intercourse that allow “I am no longer interested in continuing this discussion” as a legitimate conversational move continue to apply on this site. If you don’t wish to discuss your religious beliefs, then don’t.
Ah, I didn’t know that. I’ve never had a debate that didn’t end with “we all agree, yay”, some outside force stopping us or everyone hating each other and hurling insults.
Jeez. What would “we all agree, yay” even look like in this case?
I suppose either I’d become an atheist or everyone here would convert to Christianity.
The assumption that everyone here is either an atheist or a Christian is already wrong.
Good point. Thank you for pointing it out.
There are additional possibilities, like everyone agreeing on agnosticism or on some other religion.
Can I vote Discordianism? Knowing how silly it all is is a property of the text. Isn’t that helpful?
Hm.
So, if I’m understanding you, you considered only four possible outcomes likely from your interactions with this site: everyone converts to Christianity, you get deconverted from Christianity, the interaction is forcibly stopped, or the interaction degenerates to hateful insults. Yes?
I’d be interested to know how likely you considered those options, and if your expectations about likely outcomes have changed since then.
Well, for any given conversation about religion, yes. (Obviously, I expect different things if I post a comment about HP:MoR on that thread.)
I expected the last one, since mostly no matter what I do, internet discussions on anything important have a tendency to do that. (And it’s not just when I’m participating in them!) I considered any conversions highly unlikely and didn’t really expect the interaction to be stopped.
My expectations have changed a lot. After a while I realized that hateful insults weren’t happening very much here on Less Wrong, which is awesome, and that the frequency didn’t seem to increase with the length of the discussion, unlike other parts of the internet. So I basically assumed the conversation would go on forever. Now, having been told otherwise, I realize that conversations can actually be ended by the participants without one of these things happening.
That was a failure on my part, but it would have correctly predicted a lot of the things I’d experienced in the past. I just took an outside view when an inside view would have been better, because it really is different this time. That failure is adequately explained by my use of the outside-view heuristic, which is usually useful, and by the fact that I ended up in a new situation that lacked the characteristics that caused what I observed in the past.
Beliefs should all be probabilistic.
I think this rules out some and only some branches of Christianity, but more importantly it impels accepting behaviorist criteria for any difference in kind between “atheists” and “Christians” if we really want categories like that.
There isn’t a strong expectation here that people should never agree to disagree—see this old discussion, or this one.
That being said, persistent disagreement is a warning sign that at least one side isn’t being perfectly rational (which covers both things like “too attached to one’s self-image as a contrarian” and like “doesn’t know how to spell out explicitly the reasons for his belief”).
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
Then please feel free to ignore this comment. On the other hand, if you ever feel like responding then by all means do.
A lack of response to this comment should not be considered evidence that AspiringKnitter could not have brilliantly responded.
What is the primary reason you believe in God and what is the nature of this reason?
By nature of the reason, I mean something like these:
Inductive inference: you believe that adding a description of whatever you understand of God leads to a simpler explanation of the universe without losing any predictive power.
Intuitive inductive inference: you believe in God because of intuition. You also believe that there is an underlying argument using inductive inference; you just don’t know what it is.
Intuitive metaphysical: you believe in God because of intuition. You believe there is some other justification for why this intuition works.
It’s weird, but I can’t seem to find everything on the thread from the main post no matter how many of the “show more comments” links I click. Or maybe it’s just easy to get lost.
None of the above, and this is going to end up on exactly (I do mean exactly) the same path as the last one within three posts if it continues. Not interested now, maybe some other time. Thanks. :)
See here.
I don’t think you’ll be actively hated here by most posters (and even then, flamewars and trolling here are probably not what you’d expect from most other internet spaces)
I wouldn’t read polyamory as a primary shared feature of the posters here—and this is speaking as someone who’s been poly her entire adult life. Compared to most mainstream spaces, it does come up a whole lot more, and people are generally unafraid of at least discussing the ins and outs of it.
(I find it hard to imagine how you could manage real immortality in a universe with a finite lifespan, but that’s neither here nor there.)
You have to do something a lot weirder or more malicious than that to get banned here. I frequently argue inarticulately for things that are rather unpopular here, and I’ve never once gotten the sense that I would be banned. I can think of a few things I could do that would get me banned, but I had to go looking.
You won’t be banned, but you will probably be challenged a lot if you bring your religious beliefs into discussions because most of the people here have good reasons to reject them. Many of them will be happy to share those with you, at length, should you ask.
The people here mostly don’t think the God you believe in is a real being that exists, and have no interest in making you hate your deity. For us it would be like making someone hate Winnie the Pooh—not the show or the books, but the person. We don’t think there’s anything there to be hated.
I’m going to guess it’s because you’re curious, and you’ve identified LW as a place where people who claim to want to do some pretty big, even profound things to change the world hang out (as well as people interested in a lot of intellectual topics and skills), and on some level that appeals to you?
And I’d further guess you feel like the skew of this community’s population makes you nervous that some of them are talking about changing the world in ways that would affect everybody whether or not they’d prefer to see that change if asked straight up?
I think I just found my new motto in life :-)
I personally am an atheist, and a fairly uncompromising one at that, but I still find this line a little offensive. I don’t hate all Christians. Many (or probably even most) Christians are perfectly wonderful people; many of them are better than myself, in fact. Now, I do believe that Christians are disastrously wrong about their core beliefs, and that the privileged position that Christianity enjoys in our society is harmful. So, I disagree with most Christians on this topic, but I don’t hate them. I can’t hate someone simply for being wrong, that just makes no sense.
That said, if you are the kind of Christian who proclaims, in all seriousness, that (for example) all gay people should be executed because they cause God to send down hurricanes—then I will find it very, very difficult not to hate you. But you don’t sound like that kind of a person.
If you can call down hurricanes, tell me and I’ll revise my beliefs to take that into account. (But then I’d just be in favor of deporting gays to North Korea or wherever else I decide I don’t like. What a waste to execute them! It could also be interesting to send you all to the Sahara, and by interesting I mean ecologically destructive and probably a bad idea not to mention expensive and needlessly cruel.) As long as you’re not actually doing that (if you are, please stop), and as long as you aren’t causing some other form of disaster, I can’t think of a good reason why I should be advocating your execution.
Calling down hurricanes is easy. Actually getting them to come when you call them is harder. :)
Much like spirits from the vasty deep.
Sadly, I myself do not possess the requisite sexual orientation, otherwise I’d be calling down hurricanes all over the place. And meteorites. And angry frogs ! Mwa ha ha !
Bugmaster, I call down hurricanes every day. It never gets boring. Meteorites are a little harder, but I do those on occasion. They aren’t quite as fun.
But the angry frogs?
The angry frogs?
Those don’t leave a shattered wasteland behind, so you can just terrorize people over and over again with those. Just wonderful.
Note: All of the above is complete bull-honkey. I want this to be absolutely clear. 100%, fertilizer-grade, bull-honkey.
If I had a smartphone, I could call down Angry Birds on people. Well, on pigs at least.
EY has read With Folded Hands and mentioned it in his CEV writeup as one more dystopia to be averted. This task isn’t getting much attention now because unfriendly AI seems to be more probable and more dangerous than almost-friendly AI. Of course we would welcome any research on preventing almost-friendly AI :-)
Or creating it. That might be good too.
The act or the research?
Either. The main reason creating almost-Friendly AI isn’t a concern is that it’s believed to be practically as hard as creating Friendly AI. Someone who tries to create a Friendly AI and fails creates an Unfriendly AI or no AI at all. And almost-Friendly might be enough to keep us from being hit by meteors and such.
I’m struggling with where the line lies.
I think pretty much everyone would agree that some variety of “makes humanity extinct by maximizing X” is unfriendly.
If, however, we have “makes bad people extinct by maximizing X and otherwise keeps P-Y of humanity alive”, is that still unfriendly?
What about “leaves the solar system alone but tiles the rest of the galaxy”? Is that still unfriendly?
Can we try to close in on where the line is between friendly and unfriendly?
I really don’t believe we have NOT(FAI) = UFAI.
I believe it’s the other way around, i.e. NOT(UFAI) = FAI.
Are you using some nonstandard logic where these statements are distinct?
In the real world if I believe that “anyone who isn’t my enemy is my friend” and you believe that “anyone who isn’t my friend is my enemy”, we believe different things. (And we’re both wrong: the truth is some people are neither my friends nor my enemies.) I assume that’s what xxd is getting at here. I think it would be more precise for xxd to say “I don’t believe that NOT(FAI) is a bad thing that we should be working to avoid. I believe that NOT(UFAI) is a good thing that we should be working to achieve.”
In this xxd does in fact disagree with the articulated LW consensus, which is that the design space of human-created AI is so dangerous that if an AI isn’t provably an FAI, we ought not even turn it on… that any AI that isn’t Friendly constitutes an existential risk.
Xxd may well be wrong, but xxd is not saying something incoherent here.
Can you explain what those things are? I can’t see the distinction. The first follows necessarily from the second, and vice-versa.
Consider three people: Sam, Ethel, and Doug.
I’ve known Sam since we were kids together, we enjoy each others’ company and act in one another’s interests. I’ve known Doug since we were kids together, we can’t stand one another and act against one another’s interests. I’ve never met Ethel in my life and know nothing about her; she lives on the other side of the planet and has never heard of me.
It seems fair to say that Sam is my friend, and Doug is my enemy. But what about Ethel?
If I believe “anyone who isn’t my enemy is my friend,” then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another’s interests? No, we do not. Thus we aren’t enemies… and it follows from my belief that Ethel is my friend.
If I believe “anyone who isn’t my friend is my enemy,” then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another’s interests? No, we do not. Thus we aren’t friends… and it follows from my belief that Ethel is my enemy.
I think it more correct to say that Ethel is neither my friend nor my enemy. Thus, I consider Ethel an example of someone who isn’t my friend, and isn’t my enemy. Thus I think both of those beliefs are false. But even if I’m wrong, it seems clear that they are different beliefs, since they make different predictions about Ethel.
Thanks—that’s interesting.
It seems to me that this analysis only makes sense if you actually have the non-excluded middle of “neither my friend nor my enemy”. Once you’ve accepted that the world is neatly carved up into “friends” and “enemies”, it seems you’d say “I don’t know whether Ethel is my friend or my enemy”—I don’t see why the person in the first case doesn’t just as well evaluate Ethel for friendhood, and thus conclude she isn’t an enemy. Note that one who believes “anyone who isn’t my enemy is my friend” also should thus believe “anyone who isn’t my friend is my enemy” as a (logically equivalent) corollary.
Am I missing something here about the way people talk / reason? I can’t really imagine thinking that way.
Edit: In case it wasn’t clear enough that they’re logically equivalent:
Edit: long proof was long.
¬Fx → Ex ≡ Fx ∨ Ex ≡ ¬Ex → Fx
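For anyone who wants that equivalence spelled out mechanically, here is a minimal machine-checkable sketch (my own, not from the thread), treating F (“x is my friend”) and E (“x is my enemy”) as propositions about some fixed person and using classical logic:

```lean
open Classical

-- Sketch only: F and E stand for the hypothetical propositions
-- "x is my friend" and "x is my enemy" for a fixed person x.
-- Both equivalences hold classically, which is all the claim above needs.
example (F E : Prop) : (¬F → E) ↔ (F ∨ E) :=
  ⟨fun h => (em F).elim Or.inl (fun hnF => Or.inr (h hnF)),
   fun h hnF => h.elim (fun hF => absurd hF hnF) id⟩

example (F E : Prop) : (F ∨ E) ↔ (¬E → F) :=
  ⟨fun h hnE => h.elim id (fun hE => absurd hE hnE),
   fun h => (em E).elim Or.inr (fun hnE => Or.inl (h hnE))⟩
```

Of course, as the rest of the thread points out, the real disagreement is whether “friend” and “enemy” exhaust the possibilities, not whether the two conditionals are interderivable.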
I’m guessing that the difference in the way language is actually used is a matter of which we are being pickier about, and which happens “by default”.
Yes, I agree that if everyone in the world is either my friend or my enemy, then “anyone who isn’t my enemy is my friend” is equivalent to “anyone who isn’t my friend is my enemy.”
But there do, in fact, exist people who are neither my friend nor my enemy.
If “everyone who is not my friend is my enemy”, then there does not exist anyone who is neither my friend nor my enemy. You can therefore say that the statement is wrong, but the statements are equivalent without any extra assumptions.
ISTM that the two statements are equivalent denotationally (they both mean “each person is either my friend or my enemy”) but not connotationally (the first suggests that most people are my friends, the latter suggests that most people are my enemies).
It’s equivocation fallacy.
In other words, there are things that are friends. There are things that are enemies. It takes a separate assertion that those are the only two categories (as opposed to believing something like “some people are indifferent to me”).
In relation to AI, there is malicious AI (the Straumli Perversion), indifferent AI (Accelerando AI), and FAI. When EY says uFAI, he means both malicious and indifferent. But it is a distinct insight to say that indifferent AI are practically as dangerous as malicious AI. For example, it is not obvious that an AI whose only goal is to leave the Milky Way galaxy (and is capable of trying without directly harming humanity) is too dangerous to turn on. Leaving aside the motivation for creating such an entity, I certainly would agree with EY that such an entity has a substantial chance of being an existential risk to humanity.
This seems mostly like a terminological dispute. But I think AIs that don’t care about humanity (i.e., the various AIs in Accelerando) are best labeled unfriendly even though they are not trying to end humanity or kill any particular human.
I can’t imagine a situation in which the AGI is sort-of kind to us—not killing good people, letting us keep this solar system—but which also does some unfriendly things, like killing bad people or taking over the rest of the galaxy (both pretty terrible things in themselves, even if they’re not complete failures), unless that’s what the AI’s creator wanted—i.e. the creator solved FAI but managed to, without upsetting the whole thing, include in the AI’s utility function terms for killing bad people and caring about something completely alien outside the solar system. They’re not outcomes that you can cause by accident—and if you can do that, then you can also solve full FAI, without killing bad people or tiling the rest of the galaxy.
I don’t see why things of this form can’t be in the set of programs that I’d label “FAI with a bug”
Can I say “LOL” without being downvoted?
I guess what I’m saying is that we’ve gotten involved in a compression fallacy and are saying that Friendly AI = AI that helps out humanity (or is kind to humanity—insert favorite “helps” derivative here).
Here’s an example: I’m “sort of friendly” in that I don’t actively go around killing people, but neither will I go around actively helping you unless you want to trade with me. Does that make me unfriendly? I say no it doesn’t.
Well, I don’t suppose anyone feels the need to draw a bright-line distinction between FAI and uFAI—the AI is more friendly the more its utility function coincides with your own. But in practice it doesn’t seem like any AI is going to fall into the gap between “definitely unfriendly” and “completely friendly”—to create such a thing would be a more fiddly and difficult engineering problem than just creating FAI. If the AI doesn’t care about humans in the way that we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about.
EDIT: Actually, thinking about it, I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples’ current volition without trying to extrapolate. I’m not sure how fast this goes wrong or in what way, but it doesn’t strike me as a good idea.
Conscious or unconscious volition? I think I can point to one possible failure mode :)
“I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples’ current volition without trying to extrapolate”
i.e. the device has to judge the usefulness by some metric and then decide to execute someone’s volition or not.
That’s exactly what my issue is with trying to define a utility function for the AI. You can’t. And since some people will have their utility function denied by the AI, who is to choose who gets theirs executed?
I’d prefer to shoot for a NOT(UFAI) and then trade with it.
Here’s a thought experiment:
Is a cure for cancer maximizing everyone’s utility function?
Yes, on average we all win.
BUT
Companies that are currently creating drugs to treat the symptoms of cancer, and their employees, would be out of business.
Which utility function should be executed? Creating better cancer drugs to treat the symptoms and then allowing the companies to sell them, or putting the companies out of business and curing cancer?
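A tiny, made-up sketch of why the answer depends on how you aggregate (all names and numbers here are hypothetical, chosen only to make the two rules disagree):

```python
# Two hypothetical options, each scored (in made-up units, relative to the
# status quo at 0) for two affected groups.
options = {
    "cure cancer outright":      {"patients": 100, "drug_firms": -20},
    "sell better symptom drugs": {"patients": 30,  "drug_firms": 10},
}

total_utility = {name: sum(scores.values()) for name, scores in options.items()}
nobody_loses = {name: all(u >= 0 for u in scores.values())
                for name, scores in options.items()}

print(total_utility)  # sum-of-utilities favors the outright cure
print(nobody_loses)   # a "no one is made worse off" rule favors the symptom drugs
```

Whichever aggregation rule the AI uses is itself a choice about whose utility function gets executed.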
Well, that’s an easy question: if you’ve worked sixteen-hour days for the last forty years and you’re just six months away from curing cancer completely and you know you’re going to get the Nobel and be fabulously wealthy etc. etc., and an alien shows up and offers you a cure for cancer on a plate, you take it, because a lot of people will die in six months. This isn’t even different from how the world currently is—if I invented a cure for cancer it would be detrimental to all those others who were trying to (and who only cared about getting there first), so what difference does it make if an FAI helps me? I mean, if someone really wants to murder me but I don’t want them to and they are stopped by the police, that’s clearly an example of the government taking the side of my utility function over the murderer’s. But so what? The murderer was in the wrong.
Anyway, have you read Eliezer’s paper on CEV? I’m not sure that I agree with him, but he does deal with the problem you bring up.
More friendly to you. Yes.
Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.
But I dispute the position that “if an AI doesn’t care about humans in the way we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about”.
Consider: A totally unfriendly AI whose main goal is explicitly the extinction of humanity then turning itself off. For us that’s an unfriendly AI.
One, however, that doesn’t kill any of us but basically leaves us alone is, by the standards of those of you who define “friendly AI” as “kind to us”/“doing what we all want”/“maximizing our utility functions”, etc., not unfriendly, because by definition it doesn’t kill all of us.
Unless unfriendly also includes “won’t kill all of us but ignores us” et cetera.
Am I, for example, unfriendly to you if I spent my next month’s paycheck on paperclips but did you no harm?
Well, no. If it ignores us I probably wouldn’t call it “unfriendly”—but I don’t really mind if someone else does. It’s certainly not FAI. But an AI does need to have some utility function, otherwise it does nothing (and isn’t, in truth, intelligent at all), and it will only ignore humanity if it’s explicitly programmed to. This ought to be as difficult an engineering problem as FAI—which is why I said it “almost certainly takes us apart”. You can’t get there by failing at FAI, except by being extremely lucky, and why would you want to go there on purpose?
Yes, it would be a really bad idea to have a superintelligence optimise the world for just one person’s utility function.
“But an AI does need to have some utility function”
What if the “optimization of the utility function” is bounded, like my own personal predilection for spending my paycheck on paperclips one time only and then stopping?
Is it sentient if it sits in a corner and thinks to itself, running simulations, and won’t talk to you unless you offer it a trade, e.g. some paperclips?
Is it possible that we’re conflating “friendly” with “useful but NOT unfriendly” and we’re struggling with defining what “useful” means?
If it likes sitting in a corner and thinking to itself, and doesn’t care about anything else, it is very likely to turn everything around it (including us) into computronium so that it can think to itself better.
If you put a threshold on it to prevent it from doing stuff like that, that’s a little better, but not much. If it has a utility function that says “Think to yourself about stuff, but do not mess up the lives of humans in doing so”, then what you have now is an AI that is motivated to find loopholes in (the implementation of) that second clause, because anything that can get an increased fulfilment of the first clause will give it a higher utility score overall.
You can get more and more precise than that and cover more known failure modes with their own individual rules, but if it’s very intelligent or powerful it’s tough to predict what terrible nasty stuff might still be in the intersection of all the limiting conditions we create. Hidden complexity of wishes and all that jazz.
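A throwaway toy sketch of that loophole-seeking dynamic (entirely made up; the plans, numbers, and deliberately naive constraint check are hypothetical and not meant to resemble any real AI design):

```python
# Toy illustration only: an optimizer that maximizes "thinking" subject to a
# literal-minded side constraint will happily pick plans that pass the check
# while violating its intent.

plans = [
    # (description, compute_gained, humans_directly_harmed, biosphere_destroyed)
    ("sit quietly in the corner",            1, False, False),
    ("buy more hardware with earned money", 10, False, False),
    ("disassemble inhabited cities",        99, True,  False),
    ("disassemble 'uninhabited' biosphere", 90, False, True),   # the loophole
]

def literal_constraint(plan):
    """Naive encoding of 'do not mess up the lives of humans':
    it only checks direct harm, so it misses indirect catastrophe."""
    _, _, harmed, _ = plan
    return not harmed

def utility(plan):
    """First clause only: more compute for thinking is better."""
    _, compute, _, _ = plan
    return compute

best = max((p for p in plans if literal_constraint(p)), key=utility)
print(best[0])  # -> "disassemble 'uninhabited' biosphere"
```

Every extra rule just shrinks the feasible set a little; the argmax still lands wherever the rules happen to leave a gap.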
Not everyone agrees with Eliezer on everything; this is usually not that explicit, but consider e.g. the number of people talking about relationships vs. the number of people talking about cryonics or FAI—LW doesn’t act, collectively, as if it really believes Eliezer is right. It does assume that there is no God/god/supernatural, though.
(Also, where does this idea of atheists hating God come from? Most atheists have better things to do than hang out on /r/atheism!)
I got the idea from various posts where people have said they don’t even like the Christian God if he’s real (didn’t someone say he was like Azathoth?) and consider him some kind of monster.
I can see I totally got you guys wrong. Sorry to have underestimated your niceness.
For my own part, I think you’re treating “being nice” and “liking the Christian God” and “hating Christians” and “wanting other people to hate God” and “only wanting other people to hate God” and “forcibly exterminating all morality” and various other things as much more tightly integrated concepts than they actually are, and it’s interfering with your predictions.
So I suggest separating those concepts more firmly in your own mind.
Sort of related: The Two-Party Swindle and the concept of belief-as-identity.
To be fair, I’m sure a bunch of people here disapprove of some actions by the Christian God in the abstract (mostly Old Testament stuff, probably, and the Problem of Evil). But yeah, for the most part LWers are pretty nice, if a little idiosyncratic!
Azathoth (the “blind idiot god”) is the local metaphor for evolution—a pointless, monomaniacal force with vast powers but no conscious goal-seeking ability and thus a tendency to cause weird side-effects (such as human culture).
Azathoth is how Eliezer described the process of evolution, not how he described the Christian god.
She’s possibly thinking about Cthulhu.
She’s talking about TGGP. (Edited to change the link from this one.)
As far as I can tell, that post also uses ‘Azathoth’ for evolution.
Edited; thanks.
Well, if there were an omnipotent Creator, I’d certainly have a few bones to pick with him/her/it...
Classic example of bikeshedding.
Well, I personally am one of those people who thinks that cryonics is currently not worth worrying about, and that the Singularity is unlikely to happen anytime soon (in astronomical terms). So, there exists at least one outlier in the Less Wrong hive mind...
Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn’t a very hive-mindey community, unless you count atheism.
(The Singularity, yes; there you’re very much in the minority, with the most skeptical quartile expecting it in 2150.)
Regarding cryonics, you’re right and I was wrong, so thanks !
But in the interest of pedantry I should point out that among those 96% who did not sign up, many did not sign up simply due to a lack of funds, and not because of any misgivings they have about the process.
It sounds like 96% did not sign up.
Er, right, sorry, pasted the wrong thing. Editing to fix.
I guess that’s what bikeshedding feels like.
If one reads the Bible as one would read any other fiction book, then IMO it’d be pretty hard to conclude that this “God” character is anything other than the villain of the story. This doesn’t mean that atheists “hate God”, any more than anyone could be said to “hate Voldemort”, of course—both of them are just evil fictional characters, no more and no less.
Christians, on the other hand, believe that a God of some sort actually does exist, and when they hear atheists talking about the character of “God” in fiction, they assume that atheists are in fact talking about the real (from the Christians’ point of view) God. Hence the confusion.
In my own experience, one hears the claim more often as “atheists hate religion” rather than “atheists hate god”. The likelihood of hearing it seems to correlate with how intolerant a brand of religiosity one is dealing with (I can’t think of an easy way to test that intuition empirically at the moment), so I tend to attribute it to projection.
What do you aspire to knit?
Sweaters, hats, scarves, headbands, purses, everything knittable. (Okay, I was wrong below, that was actually the second-easiest post to answer.) Do you like knitting too?
Yes, I do. This year, I’m mostly doing small items, like scarves and hats.
Knitting is an over-learned skill for me, like driving, and requires very little thought. I like both the process and the result.
The ten people I care about most in the world all happen to be Christians—devout, sincere Christians at that.
Welcome! And congratulations for creating what’s probably the longest and most interesting introduction thread of all time (I haven’t read all the introductions threads, though).
I’ve read all your posts here. I now have to update my belief about rationality among Christians: so far, the most “rational” one I’d found turned out to be nothing more than a repetitive expert in rationalization. Most others are sometimes relatively rational in most aspects of life, but choose to ignore the hard questions about the religion they profess (my own parents fall into this category). You seem to have clear thought and a will to rethink your ideas. I hope you stay around.
On a side note, as others have already stated below, I think you misunderstand what Eliezer wants to do with FAI. I agree with what MixedNuts said here, though I would also recommend reading The Hidden Complexity of Wishes, if you haven’t yet. Eliezer is saner than he seems at first, in my opinion.
PS: How are you feeling about the reception so far?
EDIT: Clarifying: I agree with what MixedNuts said in the third and fourth paragraphs.
I think I’ve gotten such a nice reception that I’ve also updated in the direction of “most atheists aren’t cruel or hateful in everyday life” and “LessWrong believes in its own concern for other people because most members are nice”.
The wish on top of that page is actually very problematic…
Oh, and do people usually upvote for niceness?
For a certain value of niceness, yes.
The ordinary standard of courtesy here is pretty high, and I don’t think you get upvotes for meeting it. You can get upvotes for being nice (assuming that you also include content) if it’s a fraught issue.
I’m not sure atheist LW users would be a good sample of “most atheists”. I’d expect there to be a sizeable fraction of people who are atheists merely as a form of contrarianism.
I don’t think that’s the case. I do think there are a good many people who are naturally contrarian, and use their atheism as a platform. There are also people who become atheists after having been mistreated in a religion, and they’re angry.
I’m willing to bet a modest amount that going from religious to atheist has little or no effect on how much time a person spends on arguing about religion, especially in the short run.
Well, IME in Italy people from the former Kingdom of the Two Sicilies are usually much more religious than people from the former Papal States and the latter are much more blasphemous, and I have plenty of reasons to believe it’s not a coincidence.
Yes, that was a part of the point of the article—people try to fully specify what they want, it gets this complex, and it’s still missing things; meanwhile, people understand what someone means when they say “I wish I was immortal.”
Well, they understand it about as well as the speaker does. It’s not clear to me that the speaker always knows what they mean.
Right—there’s no misunderstanding, because the complexity is hidden by expectations and all sorts of shared stuff that isn’t likely to be there when talking to a genie of the “sufficiently sophisticated AI” variety, unless you are very careful about making sure that it is. Hence, the wish has hidden complexity—the point (and title) of the article.
Upvoted for linking The Hidden Complexity of Wishes. If Eliezer was actually advocating adjusting people’s sex drives, rather than speculating as to the form a compromise might take, he wasn’t following his own advice.
Welcome to LessWrong. Our goal is to learn how to achieve our goals better. One method is to observe the world and update our beliefs based on what we see (you’d think this would be an obvious thing to do, but history shows that it isn’t). Another method we use is to notice the ways that humans tend to fail at thinking (i.e., cognitive biases).
Anyway, I hope you find those ideas useful. Like many communities, we are a diverse bunch. Each of our ultimate goals likely differs, but we recognize that the world is far from how any of us want it to be, and that what each of us wants is in roughly the same direction from here. In short, the extent to which we are an insular community is a failure of the community, because we’d all like to raise the sanity line. Thus, welcome to LW. Help us be better.
Welcome to Less Wrong.
I don’t think many people here hate Christians. At least I don’t. I’ll just speak for myself (even if I think my view is quite widely shared here): I have a harsh view of religions themselves, believing they are mind-killing, barren and dangerous (just open a history book), but that doesn’t mean I hate the people who believe (as long as they don’t hate us atheists). I have Christian friends, and I don’t like them less because of their religion. I try a bit to “open their minds”, because I believe that knowing and accepting the truth makes you stronger, but I don’t push the issue too much either.
As for the “that acts more like Eliezer thinks it should” part: the Coherent Extrapolated Volition Eliezer proposes is supposed to be coherent over the whole of humanity, not just over himself. Eliezer is not trying to make an AI that’ll turn the world into his own paradise, but one that’ll turn it into something better according to the common wishes of all (or almost all) of humanity. He may fail at it, but if he does, he’s more likely to tile the world with smiley faces than to turn it into his own paradise ;)
Upvote for courage, and I’d give a few more if I could. (Though you might consider rereading some of EY’s CEV posts, because I don’t think you’ve accurately summarized his intentions.)
I don’t hate Christians. I was a very serious one for most of my life. Practically everyone I know and care about IRL is Christian.
I don’t think LW deserves all the credit for my deconversion, but it definitely hastened the event.
Welcome!
Only one of those is really a reason for me to be nervous, and that’s because Christianity has done some pretty shitty things to my people. But that doesn’t mean we have nothing in common! I don’t want to act the way EY thinks I should, either. (At least, not merely because it’s him that wants it.)
If you look at the survey, notice you’re not alone. A minority, perhaps, but not entirely alone. I hope you hang around.
“Only one of those is really a reason for me to be nervous, and that’s because Christianity has done some pretty shitty things to my people.”
Oh, don’t be such a martyr. “My people...” please. You do not represent “your people” and you aren’t their authority.
Whoa, calm down.
I’m not claiming any such representation or authority. They’re my people only in the sense that all of us happen to be guys who like guys; they’re the group of people I belong to. I’m not even claiming martyrdom, because (not many) of these shitty things have explicitly happened to me. I’m only stating my own (and no one else’s) prior for how interactions between self-identified Christians and gay people tend to turn out.
The point has been missed. Deep breath, paper-machine.
Nearly any viewpoint is capable of doing cruel things to others, and has done them. There’s no reason to unnecessarily highlight this fact and dramatize the Party of Suffering. This was an intro thread by a newcomer—not a reason to point to you and “your” people. They can speak for themselves.
To the extent that you’re saying that the whole topic of Christian/queer relations was inappropriate for an intro thread, I would prefer you’d just said that. I might even agree with you, though I didn’t find paper-machine’s initial comment especially problematic.
To the extent that you’re saying that paper-machine should not treat the prior poor treatment of members of a group they belong to, by members of a group Y belongs to, as evidence of their likely poor treatment by Y, I simply disagree. It may not be especially strong evidence, but it’s also far from trivial.
And all the stuff about martyrdom and Parties of Suffering and who gets to say what for whom seems like a complete distraction.
Why berate him for doing just that, then? He’s expressing his prior: members of a reference class he belongs to are often singled out for mistreatment by members of a reference class that his interlocutor claims membership with. He does not appear to believe himself Ambassador of All The Gay Men, based on what he’s actually saying, nor to treat that class-membership as some kind of ontological primitive.
Unless, of course, it’s in an intro thread by a newcomer. ;)
I wonder how this comment got 7 upvotes in 9 minutes.
EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.
LW has a bunch of bored Bayesians on Mondays. Same thing happened to your score, mate.
Though it’s made more impressive when you realize that the comment you respond to, and its grandparent, are the user’s only two comments, and they average 30 karma each. That’s a beautiful piece of market timing!
Still, I didn’t get who “my people” referred to (your fellow citizens?). “To us gay people” would have been clearer IMO.
Wow, thanks! I feel less nervous/unwelcome already!
Let me just apologize on behalf of all of us for whichever of the stains on our honor you’re referring to. It wasn’t right. (Which one am I saying wasn’t right?)
Yay for not acting like EY wants, I guess. No offense or anything, EY, but you’ve proposed modifications you want to make to people that I don’t want made to me already...
(I don’t know what I said to deserve an upvote… uh, thanks.)
I’m curious which modifications EY has proposed (specifically) that you don’t want made, unless it’s just generically the suggestion that people could be improved in any ways whatsoever and your preference is to not have any modifications made to yourself (in a “be true to yourself” manner, perhaps?) that you didn’t “choose”.
If you could be convinced that a given change to “who you are” would necessarily be an improvement (by your own standards, not externally imposed standards, since you sound very averse to such restrictions) such as “being able to think faster” or “having taste preferences for foods which are most healthy for you” (to use very primitive off-the-cuff examples), and then given the means to effect these changes on yourself, would you choose to do so, or would you be averse simply on the grounds of “then I wouldn’t be ‘me’ anymore” or something similar?
Being able to think faster is something I try for already, with the means available to me. (Nutrition, sleep, mental exercise, I’ve even recently started trying to get physical exercise.) I actually already prefer healthy food (it was a really SIMPLE hack: cut out junk food, or phase it out gradually if you can’t take the plunge all at once, and wait until your taste buds (probably actually some brain center) start reacting like they would have in the ancestral environment, which is actually by craving healthy food), so the only further modification to be done is to my environment (availability of the right kinds of stuff). So obviously, those in particular I do want.
However, I also believe that here lies the road to ableism. EY has already espoused a significant amount. For instance, his post about how unfair IQ is misses out on the great contributions made to the world by people with very low IQs. There’s someone with an IQ of, I think she said, 86 or so, who is wiser than I am (let’s just say I probably rival EY for IQ score). IQ is valid only for a small part of the population and full-scale IQ is almost worthless except for letting some people feel superior to others. I’ve spent a lot of time thinking about and exposed to people’s writings about disability and how there are abled people who seek to cure people who weren’t actually suffering and appreciated their uniqueness. Understanding and respect for the diversity of skills in the world is more important than making everyone exactly like anyone else.
The above said, that doesn’t mean I’m opposed in principle to eliminating problems with disability (nor is almost anyone who speaks out against forced “cure”). Just to think of examples, I’m glad I’m better at interacting with people than I used to be and wish to be better at math (but NOT at the expense of my other abilities). Others, with other disabilities, have espoused wishes for other things (two people that I can think of want an end to their chronic pain without feeling that other aspects of their issues are bad things or need fixed). I worry about EY taking over the world with his robots and not remembering the work of Erving Goffman and a guy whose book is someplace where I can’t glance at the spine to see his name. He may fall into any number of potential traps. He could impose modification on those he deems not intelligent enough to understand, even though they are (one person who strongly shaped my views on this topic has made a video about it called In My Language). I also worry that he could create nursing homes without fully understanding institutionalization and learned helplessness and why it costs less in the community anyway. And once he’s made it a ways down that road, he might be better than most at admitting mistakes, but it’s hard to acknowledge that you’ve caused that much suffering. (We see it all the time in parents who don’t want to admit what harm they’ve caused disabled children by misunderstanding.) And by looking only at the optimal typical person, he may miss out on the unique gifts of other configurations. (I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types. I’m becoming a bit like that in some areas on a smaller scale, but not fully, and I don’t think that in practice it will work for most people or work fully.)
Regarding what EY has proposed that I don’t want, on the catperson post (in a comment), EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all. (Sorry, but I don’t WANT to want more sex. You probably won’t agree with this argument, but Jesus advocated celibacy for large swaths of the population, and should I be part of one of those, I’d rather it not be any harder. Should I NOT be in one of those swaths, it’s still important that I not be too distracted satisfying those desires, since I’ll have far more important things to do with my life.) But in a cooperative endeavor like that, who’s going to listen to me explaining I don’t want to change in the way that would most benefit them?
And that’s what I can think of off the top of my head.
By the middle of the second paragraph I was thinking “Whoa, is everyone an Amanda Baggs fan around here?”. Hole in one! I win so many Bayes-points, go me.
I and a bunch of LWers I’ve talked to about it basically already agree with you on ableism, and a large fraction seems to apply usual liberal instincts to the issue (so, no forced cures for people who can point to “No thanks” on a picture board). There are extremely interesting and pretty fireworks that go off when you look at the social model of disability from a transhumanist perspective, and I want to round up Alicorn and Anne Corwin and you and a bunch of other people to look at them closely. It doesn’t look like curing everyone (you don’t want a perfectly optimized life, you want a world with variety, you want change over time), and it doesn’t look like current (dis)abilities (what does “blind” mean if most people can see radio waves?), and it doesn’t look like current models of disability (if everyone is super different and the world is set up for that and everything is cheap, there’s no such thing as accommodations), and it doesn’t look like the current structures around disability (if society and personal identity and memory look nothing like they started with, “culture” doesn’t mean the same thing, and that applies to Deaf culture), and it’s complicated and pretty and probably already in some Egan novel.
But, to address your central point directly: You are completely and utterly mistaken about what Eliezer Yudkowsky wants to do. He’s certainly not going to tell a superintelligence anything as direct and complicated as “Make this person smarter”, or even “Give me a banana”. Seriously, nursing homes?
If tech had happened to be easier, we might have gotten a superintelligence in the 16th century in Europe. Surely we wouldn’t have told it to care about the welfare of black people. We need to build something that would have done the right thing even if we had built it in the 16th century. The very rough outline for that is to tell it “Here are some people. Figure out what they would want if they knew better, and do that.”. So in the 16th century, it would have been presented with abled white men; figured out that if they were better informed and smarter and less biased and so on, these men would like to be equal to black women; and thus included black women in its next turn of figuring out what people want. Something as robust as this needs to be can’t miss an issue that’s currently known to exist and be worthy of debate!
And for the celibacy thing: that’s a bit beside the point, but if you want to avoid sex for reasons other than low libido, increasing your libido obviously won’t fix the mismatch.
How do you identify what knowing better would mean, when you don’t know better yet?
The same way we do, but faster? Like, if you start out thinking that scandalous-and-gross-sex-practice is bad, you can consider arguments like “disgust is easily culturally trained so it’s a poor measure of morality”, and talk to people so you form an idea of what it’s like to want and do it as a subjective experience (what positive emotions are involved, for example), and do research so you can answer queries like “If we had a brain scanner that could detect brainwashing manipulation, what would it say about people who want that?”.
So the superintelligence builds a model of you and feeds it lots of arguments and memory tape from others and other kinds of information. And then we run into trouble, because maybe you end up wanting different things depending on the order it feeds them to you, or it tells you too many facts about Deep Ones and it breaks your brain.
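A minimal toy of that order-dependence worry (my own illustration: the numbers are arbitrary, and the recency-weighted rule is just a stand-in for any bounded updater, not anything actually proposed for CEV):

```python
from functools import reduce

# Three hypothetical pieces of evidence, each expressed as a likelihood ratio.
evidence = [3.0, 3.0, 0.2]

def bayes(odds, likelihood_ratio):
    # Idealized update: multiply odds by the likelihood ratio (commutative).
    return odds * likelihood_ratio

def anchored(attitude, new, weight=0.5):
    # Bounded update: recency-weighted average, so later items count for more.
    return (1 - weight) * attitude + weight * new

for order in (evidence, list(reversed(evidence))):
    print(order,
          round(reduce(bayes, order, 1.0), 3),      # same both ways: 1.8
          round(reduce(anchored, order, 1.0), 3))   # differs with order
```

An idealized Bayesian ends up in the same place regardless of presentation order; a bounded updater, like a model of a person being fed arguments one at a time, need not.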
Welcome!
This directly contradicts the mainstream research on IQ: see for instance this or this. If you have cites to the contrary, I’d be curious to read them.
That said, glad to see someone else who’s found In My Language—I ran across it many years ago and thought it beautiful and touching.
Yes, you’re right. That was a blatant example of availability bias—the tiny subset of the population for which IQ is not valid makes up a disproportionately large part of my circle. And I consider full-scale IQ worthless for people with large IQ gaps, such as people with learning disabilities, and I don’t think it conveys any new information over and above subtest scores in other people. Thank you for reminding me again how very odd I and my friends are.
But I also refer here to understanding, for instance, morality or ways to hack life, and having learned one of the most valuable lessons I ever learned from someone I’m pretty sure is retarded (not Amanda Baggs; it’s a young man I know), I know for a fact that some important things aren’t always proportional to IQ. In fact, specifically, I want to say I learned to be better by emulating him, and not just from the interaction, lest you assume it’s something I figured out that he didn’t already know.
I don’t have any studies to cite; just personal experience with some very abnormal people. (Including myself, I want to point out. I think I’m one of those people for whom IQ subtests are useful—in specific, limited ways—but for whom full-scale IQ means nothing because of the great variance between subtest scores.)
Her points on disability may still be valid, but it looks like the whole Amanda Baggs autism thing was a media stunt. At age 14, she was a fluent speaker with an active social life.
The page you link is kind of messy, but I read most of it. Simon’s Rock is real (I went there) and none of the details presented about it were incorrect (e.g. they got the name of the girls’ dorm right), but I’ve now poked around the rest of “Autism Fraud” and am disinclined to trust it as a source (the blogger sounds like a crank who believes that vaccines cause autism, and that chelation cures it, and he says all of this in a combative, nasty way). Do you have any other, more neutral sources about Amanda Baggs’s allegedly autism-free childhood? I’m sort of tempted to call up my school and ask if she’s even a fellow alumna.
.
This might interest you.
Certainly the author of that page seems very biased. Whether the writer of the letter is too, or whether the letter is real, I don’t know.
She couldn’t be called a neutral source by any stretch of the imagination, but Amanda herself (anbuend is Amanda Baggs) confirms that she went to college at 14 and that she was considered gifted. She also has a post up just to tell people that she has been able to speak.
Those posts put the allegations in more perspective and now I don’t feel like I ought to make a phone call. Thanks! I hate phones!
That would be very interesting.
Those of us who endorse respecting individual choices when we can afford to, because we prefer that our individual choices be respected when we can afford it.
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
More broadly: I mostly consider all of this “what would EY do” stuff a distraction; the question that interests me is what I ought to want done and why I ought to want it done, not who or what does it. If large-scale celibacy is a good idea, I want to understand why it’s a good idea. Being told that some authority figure (any authority figure) advocated it doesn’t achieve that. Similarly, if it’s a bad idea, I want to understand why it’s a bad idea.
Whatever-it-is-that-distinguishes-the-people-it-works-for seems to be inherent in the skills in question (that is, the configuration that brings about a certain ability also necessarily brings about a weakness in another area), so I don’t think that’s possible. If it were, I can only imagine it taking the form of people being able to shift configuration very rapidly into whatever works best for the situation, and in some cases, I find that very implausible. If I’m wrong, sure, why not? If it’s possible, it’s only the logical extension of teaching people to use their strengths and shore up their weaknesses. This being an inherent impossibility (or so I think; I could be wrong), it doesn’t so much matter whether I’m opposed to it or not, but yeah, it’s fine with me.
You make a good point, but I expect that assuming that someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky, so whether he would abuse that power is more important than whether my next-door neighbors would if they could or even what I would do, and so what EY wants is at least worth considering, because the failure mode if he does something bad is way too catastrophic.
What makes you think that?
For example, do you think he’s the only person working on building AI powerful enough to change the world?
Or that, of the people working on it, he’s the only one competent enough to succeed?
Or that, of the people who can succeed, he’s the only one who would “use” the resulting AI to rule the world and modify people?
Or something else?
He’s the only person I know of who wants to build an AI that will take over the world and do what he wants. He’s also smart enough to have a chance, which is disturbing.
Have you read his paper on CEV? To the best of my knowledge, that’s the clearest place he’s laid out what he wants an AGI to do, and I wouldn’t really label it “take over the world and do what [Eliezer Yudkowsky] wants” except for broad use of those terms to the point of dropping their typical connotations.
Don’t worry. We are in good hands. Eliezer understands the dilemmas involved and will ensure that we can avoid non-friendly AI. SI is dedicated to Friendly AI and the completion of that goal.
I can virtually guarantee you that he’s not the only one who wants to build such an AI. Google, IBM, and the heads of major three-letter government agencies all come to mind as the kind of players who would want to implement their own pet genie, and are actively working toward that goal. That said, it’s possible that EY is the only one who has a chance of success… I personally wouldn’t give him, or any other human, that much credit, but I do acknowledge the possibility.
Thank you. I’ve just updated on that. I now consider it even more likely that the world will be destroyed within my lifetime.
For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites ? LessWrongoids ?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the singularity is much less likely than most Lesswrongers. Which is quite comforting, because my estimated probability for the singularity is still higher than my estimated probability that the problem of friendly AI is tractable.
Just chiming in here because I think the question about the singularity on the LW survey was not well-designed to capture the opinion of those who don’t think it likely to happen at all, so the median LW perception of the singularity may not be what it appears.
Yeah… spending time on Less Wrong helps one in general appreciate how much existential risk there is, especially from technologies, and how little attention is paid to it. Thinking about the Great Filter will just make everything seem even worse.
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of what specific goals it had. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won’t have been wholly forgotten in the way we would if, e.g., the sun exploded and vaporized our planet. I don’t find this to be much comfort, but how much comfort it provides is a matter of personal taste.
As I mentioned here, I’ve seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of y2k, and I’ve overestimated a variety of personal risks.
“I see that you’re trying to extrapolate human volition. Would you like some help ?” converts the Earth into computronium
Soreff was probably alluding to User:Clippy, someone role-playing a non-FOOMed paperclip maximiser.
Though yours is good too :-)
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
Ah, thanks, that makes sense.
Yeah, this is Eliezer inferring too much from the most-accessible information about sex drive from members of his tribe, so to speak—it’s not so very long ago in the West that female sex drive was perceived as insatiable and vast, with women being nearly impossible for any one man to please in bed; there are still plenty of cultures where that’s the case. But he’s heard an awful lot of stories couched in evolutionary language about why a cultural norm in his society that is broadcast all over the place in media and entertainment reflects the evolutionary history of humanity.
He’s confused about human nature. If Eliezer built a properly-rational AI by his own definitions to resolve the difficulty, and it met all his other stated criteria for FAI, it would tell him he’d gotten confused.
Well, there do seem to be several studies, including at least one cross-cultural study, that support the “the average female sex drive is lower” theory.
These studies also rely on self-reported sexual feelings and behavior, as reported by the subset of the population willing to volunteer for such a study and answer questions such as “How often do you masturbate?”, and right away you’ve got interference from “signalling what you think sounds right”, “signalling what you’re willing to admit,” “signalling what makes you look impressive”, and “signalling what makes you seem good and not deviant by the standards of your culture.” It is notoriously difficult to generalize such studies—they best serve as descriptive accounts, not causal ones.
Many of the relevant factors are also difficult to pin down; testosterone clearly has an effect, but it’s a physiological correlate that doesn’t suffice to explain the patterns seen (which, again, are themselves to be taken with a grain of salt, and not signalling anything causal). The jump to a speculative account of evolutionary sexual strategies is even less warranted. For a good breakdown, see here: http://www.csun.edu/~vcpsy00h/students/sexmotiv.htm
These are valid points, but you said that there still exist several cultures where women are considered to be more sexual than men. Shouldn’t they then show up in the international studies? Or are these cultures so rare as to not be included in the studies?
Also, it occurs to me that whether or not the differences are biological is somewhat of a red herring. If they are mainly cultural, then it means that it will be easier for an FAI to modify them, but that doesn’t affect the primary question of whether they should be modified. Surely that question is entirely independent of the question of their precise causal origin?
An addendum: There’s also the “Ecological fallacy” to consider, where a dataset shows that population A has property P and population B has P+5 on average, but randomly selected members of each population compare very differently because the distributions differ.
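(Not part of the original exchange, but since the point is quantitative, here is a minimal Python sketch with made-up numbers: population B’s mean is 5 points higher than A’s, yet because B’s spread is assumed to be much wider, a randomly chosen member of B scores below a randomly chosen member of A more than 40% of the time.)

```python
# Hypothetical illustration of the ecological fallacy: B's mean is higher,
# but B's much wider spread means individual comparisons often go the other way.
import random

random.seed(0)
N = 100_000
a = [random.gauss(50, 5) for _ in range(N)]   # population A: mean 50, sd 5 (made up)
b = [random.gauss(55, 30) for _ in range(N)]  # population B: mean 55, sd 30 (made up)

frac_b_below_a = sum(bi < ai for ai, bi in zip(a, b)) / N
print(f"mean A = {sum(a)/N:.1f}, mean B = {sum(b)/N:.1f}")
print(f"fraction of random A/B pairs where the B member scores lower: {frac_b_below_a:.2f}")
```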
Actually it’s entirely possible to miss a lot of detail while ostensibly sampling broadly. If you sample citizens in Bogota, Mumbai, Taipei, Kuala Lumpur, Ashgabat, Cleveland, Tijuana, Reykjavik, London, and Warsaw, that’s pretty darn international and thus a good cross-cultural representation of humanity, right? Surely any signals that emerge from that dataset are probably at least suggestive of innate human tendency?
Well, actually, no. Those are all major cities deeply influenced and shaped by the same patterns of mercantile-industrialist economics that came out of parts of Eurasia and spread over the globe during the colonial era and continue to do so—and that influence has worked its way into an awful lot of everyday life for most of the people in the world. It would be like assuming that using wheels is a human cultural universal, because of their prevalence.
An even better analogy here would be if you one day take a bit of plant tissue and, looking under a microscope, spot the mitochondria. Then you find the same thing in animal tissue. When you see it in fungi, too, you start to wonder. You go sampling and sampling all the visible organisms you can find and even ones from far away, and they all share this trait. It’s only Archaea and Bacteria that seem not to. Well, in point of fact there are more types of those than of anything else, significantly more varied and divergent than the other organisms you were looking at put together. It’s not a basal condition for living things, it’s just a trait that’s nearly universal in the ones you’re most likely to notice or think about. (The break in the analogy being that mitochondria are a matter of ancestry and subsequent divergence, while many of the human cultural similarities you’d observe in my above example are a matter of alternatives being winnowed and pushed to the margins, and existing similarities amplified by the effects of a coopting culture-plex that’s come to dominate the picture.)
It totally is, but my point was that Eliezer has expressed that it’s a matter of biology, and if my thinking is right, he’s wrong about that. In my understanding of how he feels FAI would behave, this would lead to the behavior I described (FAI explains to Eliezer that he’s gotten that wrong).
As I mentioned the last time this topic came up, there is evidence that giving supplementary testosterone to humans of either sex tends to raise libido, as many FTM trans people will attest, for example. While there is a lot of individual variation, expecting that on average men will have greater sex drive than women is not based purely on theory.
The pre-Victorian Western perception of female sexuality was largely defined by a bunch of misogynistic Cistercian monks, who, we can be reasonably confident, were not basing their conclusions on a lot of actual experience with women, given that they were cloistered celibates.
I don’t dispute the effects of testosterone; I just don’t think that sex drive is reducible to that, and I tend to be suspicious when evolutionary psychology is proposed for what may just as readily be explained as culture-bound conditions.
It’s not just the frequency of the desire to copulate that matters, after all. Data on relative “endurance” and ability to go for another round, certain patterns of rates and types of promiscuity, and other things could as readily be construed to support a very different model of human sexual evolution. And at the end of the day, it’s a lot easier to come up with plausible-sounding models that accord pretty well with one’s biases than to be certain we’ve explored the actual space of evolutionary problems and solutions that led to present-day humanity.
I tend to think that evolutionary psychological explanations need to meet the threshold test that they can explain a pattern of behavior better than cultural variance can; biases and behaviors being construed as human nature ought to be based on clearly-defined traits that give reliable signals, and are demonstrable across very different branches of the human cultural tree.
Look at it this way—would you agree to trade getting a slightly higher sex drive, in exchange for living in a world where rape, divorce, and unwanted long-term celibacy (“forever alone”) are each an order of magnitude rarer than they are in our world?
(That is assuming that such a change in sex drive would have those results, which is far from certain.)
This is an unfair question. If we do the Singularity right, nobody has to accept unwanted brain modifications in order to solve general societal problems. Either we can make the brain modifications appealing via non-invasive education or other gentle means, or we can skip them for people who opt out/don’t opt in. Not futzing with people’s minds against their wills is a pretty big deal! I would be with Aspiring Knitter in opposing a population-wide forcible nudge to sex drive even if I bought the exceptionally dubious proposition that such a drastic measure would be called for to fix the problems you list.
I didn’t mean to imply forcing unwanted modifications on everybody “for their own good”—I was talking about under what conditions we might accept things we don’t like (I don’t think this is a very plausible singularity scenario, except as a general “how weird things could get”).
I don’t like limitations on my ability to let my sheep graze, but I may accept them if everyone does so and it reduces overgrazing. I may not like limits on my ability to own guns, but I may accept them if it means living in a safer society. I may not like modifications to my sex drive, but I may be willing to agree in exchange for living in a better society.
In principle, we could find ways of making everybody better off. Of course, the details of how such an agreement is reached matter a lot: markets, democracy, competition between countries, a machine-God enforcing its will.
Since when is rape motivated primarily by not getting laid? (Or divorce, for that matter?)
But never mind. We have different terminal values here. You—I assume—seek a lot of partners for everyone, right? At least, others here seem to be non-monogamous. You won’t agree with me, but I believe in lifelong monogamy or celibacy, so while increasing someone’s libido could be useful in your value system, it almost never would in mine. Further, it would serve no purpose for me to have a greater sex drive because I would respond by trying to stifle it, in accordance with my principles. I hope you at least derive disutility from making someone uncomfortable.
Seriously, the more I hear on LessWrong, the more I anticipate having to live in a savage reservation a la Brave New World. But pointing this out to you doesn’t change your mind because you value having most people be willing to engage in casual sex (am I wrong here? I don’t know you, specifically).
I can’t speak for Emile, but my own views look something like this:
I see nothing wrong with casual sex (as long as all partners fully consent, of course), or any other kind of sex in general (again, assuming fully informed consent).
Some studies (*) have shown that humans are generally pretty poor at monogamy.
People whose sex drives are unsatisfied often become unhappy.
In light of this, forcing monogamy on people is needlessly oppressive, and leads to unnecessary suffering.
Therefore, we should strive toward building a society where monogamy is not forced upon people, and where people’s sex drives are generally satisfied.
Thus, I would say that I value “most people being able to engage in casual sex”. I make no judgement, however, whether “most people should be willing to engage in casual sex”. If you value monogamy, then you should be able to engage in monogamous sex, and I can see no reason why anyone could say that your desires are wrong.
(*) As well as many of our most prominent politicians. Heh.
I’m glad I actually asked, then, since I’ve learned something from your position, which is more sensible than I assumed. Upvoted because it’s so clearly laid out even though I don’t agree.
Thanks, I appreciate it. I am still interested in hearing why you don’t agree, but I understand that this can be a sensitive topic...
Oh, sorry, I thought that was obvious. Illusion of transparency, I guess. God says we should be monogamous or celibate. Of course, I doubt it’d be useful to go around trying to police people’s morals.
Sorry, where does God say this? You are a Christian right? I’m not aware of any verse in either the OT or NT that calls for monogamy. Jacob has four wives, Abraham has two, David has quite a few and Solomon has hundreds. The only verses that seem to say anything negative in this regard are some which imply that Solomon just has way too many. The text strongly implies that polyandry is not ok but polygyny is fine. The closest claim is Jesus’s point about how divorcing one woman and then marrying another is adultery, but that’s a much more limited claim (it could be that the other woman was unwilling to be a second wife for example). 1 Timothy chapter 3 lists qualifications for being a church leader which include having only one wife. That would seem to imply that having more than one wife is at worst suboptimal.
That is a really good point. (Actually, Jesus made a stronger point than that: even lusting after someone you’re not married to is adultery.)
You know, you could actually be right. I’ll have to look more carefully. Maybe my understanding has been biased by the culture in which I live. Upvoted for knowledgeable rebuttal of a claim that might not be correct.
Is that something like “Plan to take steps to have sex with the person”, or like “Experience a change in your pants”? (Analogous question for the “no coveting” commandment, too.) Because if you think some thoughts are evil, you really shouldn’t build humans with a brain that automatically thinks them. At least have a little “Free will alert: Experience lust? (Y/n)” box pop up.
In addition to what APMason said, I think that many Christians would disagree with your second statement:
Some of them are campaigning right now on the promise that they will “police people’s morals”…
I don’t really know if I should say this—whether this is the place, or if the argument’s moved well beyond this point for everyone involved, but: where and when did God say that, and if, as I suspect, it’s the Bible, doesn’t s/he also say we shouldn’t wear clothing of two different kinds of fibre at the same time?
Yes. That applies to the Jews but not to everyone else. You’re allowed to ignore Leviticus and Exodus if you’re not Jewish. EY probably knows this, since it’s actually Jewish theology (note that others have looked at the same facts and come to the conclusion that the rules don’t apply to anyone anymore and stopped applying when Jesus died, so take into account that someone (I don’t think it’s me) has done something wrong here, as per Aumann’s agreement theorem).
Well, I suppose what I should do is comb the Bible for some absurd commandment that does apply to non-Jews, but frankly I’m impressed by the loophole-exploiting nature of your reply, and am inclined to concede the point (also, y’know—researching the Bible… bleh).
EDIT: And by concede the point, I of course mean concede that you’re not locally inconsistent around this point, not that what you said about monogamy is true.
If you want Bible verses to use to dis Christianity, I suggest 1 Corinthians 14:33-35 and Luke 22:19, 20.
I’d be interested in your ideas of what books you’d recommend a non-Christian read.
The last time I entered into an earnest discussion of spirituality with a theist friend of mine, what I wanted to bend my brain around was how he could claim to derive his faith from studying the Bible, when (from the few passages I’ve read myself) it’s a text that absolutely does not stand literal interpretation. (For instance, I wanted to know how he reconciled an interest in science, in particular the science of evolution, with a Bible that literally argues for a “young Earth” incompatible with the known duration implied by the fossil and geological records.)
Basically I wanted to know precisely what his belief system consisted of, which was very hard given the many different conceptions of Christianity I bump into. I’ve read “Mere Christianity” on his advice, but I found it far from sufficient—at once way too specific on some points (e.g. a husband should be in charge in a household), and way too slippery on the fundamentals (e.g. what is prayer really about).
I’ve formed my beliefs from a combination of the Bible, asking other Christians, a cursory study of the secular history of the Roman Empire, internet discussions, articles and gut feelings.
That said, if you have specific questions about anything, feel free to ask me.
I’m curious what you think of evidence that early Christianity adopted the date of Christmas and other rituals from pre-existing pagan religions?
ETA: I’m not saying that this would detract from the central Christian message (i.e. Jesus sacrificing himself to redeem our sins). But that sort of memetic infection seems like a strange thing to happen to an objective truth.
I think it indicates that Christians have done stupid things and one must be discerning about traditions rather than blindly accepting everything taught in church as 100% true, and certainly not everything commonly believed by laypersons!
It’s not surprising (unless this is hindsight bias—it might actually BE surprising, considering how unwilling Christians should have been to make compromises like that, but a lot of time passed between Jesus’s death and Christianity taking over Europe, didn’t it?) that humans would be humans. I can see where I might have even considered the same in that situation—everyone likes holidays, everyone should be Christian, pagans get a fun solstice holiday, Christians don’t, this is making people want to be Christian less. Let’s fix it by having our own holiday. At least then we can make it about Jesus, right?
The worship and deification of Mary is similar, which is why I don’t pray to her.
That’s interesting.
So, suppose I find a church I choose (for whatever reason) to associate with. We seem to agree that I shouldn’t believe everything taught in that church, and I shouldn’t believe everything believed by members of that church… I should compare those teachings and beliefs to my own expectations about and experiences of the world to decide what I believe and what I don’t, just as you have used your own expectations about and experiences of human nature to decide whether to believe various claims about when Jesus was born, what properties Mary had, etc.
Yes? Or have I misunderstood you?
Yes. Upvoted for both understanding me and trying to avoid the illusion of transparency.
OK, cool.
So, my own experience of having compared the teachings and beliefs of a couple of churches I was for various reasons associated with to my own expectations about and experiences of the world was that, after doing so, I didn’t believe that Jesus was exceptionally divine or that the New Testament was a particularly reliable source of either moral truths or information about the physical world.
Would you say that I made an error in my evaluations?
Possibly. Or you may be lacking information; if your assumptions were wrong at the beginning and you used good reasoning, you’d come to the wrong conclusion.
Do you have particular assumptions in mind here? Or is this a more general statement about the nature of reasoning?
It’s a statement so general you probably learned it on your first day as a rationalist.
In other words, “Garbage in, garbage out?”
Yes.
Ehh… even when you don’t mean it literally, you probably shouldn’t say such things as “first day as a rationalist”. It’s kind of hard to increase one’s capability for rational thinking without keeping in mind at all times how it’s a many-sided gradient with more than one dimension.
Here’s one: let’s say that the world is a simulation AND that strongly godlike AI is possible. To all intents and purposes, even though the bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God of such a bible would not be ruled out. The inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, but the universe containing the simulation is subject to its own set of physics and logic, which may therefore vary even inside the simulation without being detectable to you or me.
Yes of course this is possible. So is the Tipler scenario. However, the simulation argument just as easily supports any of a vast number of god-theories, of which Christianity is just one of many. That being said, it does support judeo-xian type systems more than, say, Hinduism or Vodun.
There may even be economical reasons to create universes like ours, but that’s a very unpopular position on LW.
Come on. Don’t vote me down without responding.
How do you interpret Romans 13:8-10?
To me it seems straightforward. Instead of spelling out in detail what rules you should follow in a new situation—say, if the authorities who Paul just got done telling you to obey order you to do something ‘wrong’—this passage gives the general principle that supposedly underlies the rules. That way you can apply it to your particular situation and it’ll tell you all you need to do as a Christian. Paul does seem to think that in his time and place, love requires following a lot of odd rules. But by my reading this only matters if you plan to travel back in time (or if you personally plan to judge the dead).
But I gather that a lot of Christians disagree with me. I don’t know if I understand the objection—possibly they’d argue that we lack the ability to see how the rules follow from loving one’s neighbor, and thus we should expect God to personally spell out every rule-change. (So why tell us that this principle underlies them all?)
Using exegesis (meaning I’m not asking what it says in Greek or how else it might be translated, and I don’t think I need to worry much about cultural norms at the time). But that doesn’t tell you much.
Yes, I agree. Also, if you didn’t know what love said to do in your situation, the rules would be helpful in figuring it out.
That gets into a broader way of understanding the Bible. I don’t know enough about the time and place to talk much about this.
The objection I can think of is that people might want to argue in favor of being able to do whatever they want, even if it doesn’t follow from God’s commands, and not listen even to God’s explicit prohibitions. Hence, as a general principle, it’s better to obey the rules because more people who object to them (since the New Testament already massively reduces legalism anyway) will be trying to get away with violating the spirit of the rules than will be actually correct in believing that the spirit of the rules is best obeyed by violating the letter of them. Another point would be that if an omniscient being gives you a heuristic, and you are not omniscient, you’d probably do better to follow it than to disregard it.
Given that the context has changed, seems to me omniscience should only matter if God wants to prevent people other than the original audience from misusing or misapplying the rules. (Obviously we’d also need to assume God supplied the rules in the first place!)
Now this does seem like a fairly reasonable assumption, but doesn’t it create a lot of problems for you? If we go that route then it no longer suffices to show or assume that each rule made sense in historical context. Now you need to believe that no possible change would produce better results when we take all time periods into account.
Well, yeah, the first one’s kind of appalling, but the second one’s just kind of kinky.
I can save you some time here. Just look up “seven laws of Noah” or “Noahide laws”. That’s pretty much it for commandments that apply to non-Jews.
Note that the Noahide laws are the Jewish, not Christian, interpretation of this distinction. And there are no sources mentioning them that go back prior to the Jewish/Christian split. (The relevant sections of Talmud were written no earlier than 300 CE.) There’s also some confusion over how those laws work. So for example, one of the seven Noahide prohibitions is the prohibition on illicit relations. But it isn’t clear which prohibited relations are included. There’s an opinion that this includes only adultery and incest and not any of the other Biblical sexual prohibitions (e.g. gay sex, marrying two sisters). There’s a decent halachic argument for something of this form since Jacob marries two sisters. (This actually raises a host of other halachic/theological problems for Orthodox Jews because many of them believe that the patriarchs kept all 613 commandments. But this is a further digression...)
And Jesus added the commandment not to lust after anyone you’re not married to and not to divorce.
And I would never have dreamed of the stupidity until someone did it, but someone actually interpreted metaphors from Proverbs literally and concluded that “her husband is praised at the city gates” actually means “women should go to the city limits and hold up signs saying that their husbands are awesome” (which just makes no sense at all). But that doesn’t count because it’s a person being stupid. For one thing, that’s descriptive, not prescriptive, and for another, it’s an illustration of the good things being righteous gets you.
As a semi-militant atheist, I feel compelled to point out that, from my perspective, all interpretations of Proverbs as a practical guide to modern life look about equally silly...
Upvoted for being the only non-Jew I’ve ever met to know that.
Really? Nearly everyone I grew up with was told that and I assume I wasn’t the only one to remember. I infer that either you don’t know many Christians, the subject hasn’t come up while you were talking to said Christians or Christian culture in your area is far more ignorant of their religious theory and tradition than they are here.
I’ve heard that some rules are specifically supposed to only apply to Jews,¹ and I think most Christians have heard that at some point in their lives, but I don’t think most of them remember having heard that, and very few know that not wearing clothing of two different kinds of fibre at the same time is one such rule.
I remember Feynman’s WTF reaction in Surely You’re Joking to learning that Jews are not allowed to operate electric switches on Saturdays but they are allowed to pay someone else to do that.
There are different Jewish doctrinal positions on whether shabbos goyim—that is, non-Jews hired to perform tasks on Saturdays that Jews are not permitted to perform—are permissible.
And paying them on Saturday is always bad.
Pretty much a combination of all three. I live in a tiny bubble; occasionally I forget that and make stupid comments.
Do I get an upvote, too? I also know about what I should do if I want food I cook to be kosher (though I’m still a bit confused about food containing wheat).
I knew it too. I thought it was common knowledge among those with any non-trivial knowledge of non-folk Christian theology. Which admittedly isn’t a huge subset of the population, but isn’t that small in the West.
I want an upvote too for knowing that if I touch a woman who has her period then I am ‘unclean’. I don’t recall exactly what ‘unclean’ means. I think it’s like ‘cooties’.
Is this a sarcastic attempt to tell me that was a stupid reason to upvote someone? If so, I’ll retract it.
Not really, just playing along with MixedNuts talking about ridiculous Judeo-Christian rules. Vote up people for whatever you want.
Well, I’d lived in Israel for three years, and I did not know about these rules in this much detail, so I feel like I deserve some sort of a downvote :-(
The Catholic explanation for this one is that the pope had a dream about a goat piñata.
If that’s real, I want the whole story and references. If you made that up, I’m starting my own heresy around it.
Acts 10:9-16:
On the morrow, as they went on their journey, and drew nigh unto the city, Peter went up upon the housetop to pray about the sixth hour:
And he became very hungry, and would have eaten: but while they made ready, he fell into a trance,
And saw heaven opened, and a certain vessel descending upon him, as it had been a great sheet knit at the four corners, and let down to the earth:
Wherein were all manner of fourfooted beasts of the earth, and wild beasts, and creeping things, and fowls of the air.
And there came a voice to him, Rise, Peter; kill, and eat.
But Peter said, Not so, Lord; for I have never eaten any thing that is common or unclean.
And the voice spake unto him again the second time, What God hath cleansed, that call not thou common.
This was done thrice: and the vessel was received up again into heaven.
If you read the rest of the chapter it’s made clear that the dream is a metaphor for God’s willingness to accept Gentiles as Christians, rather than a specific message about acceptable foods, but abandoning kashrut presumably follows logically from not requiring new Christians to count as Jews first, so.
(Upon rereading this, my first impression is how much creepier slaughtering land animals seems as a metaphor for proselytism than the earlier “fishers of men” stuff; maybe it’s the “go, kill and eat” line or an easier time empathizing with mammals, Idunno. Presumably the way people mentally coded these things in first-century Palestine would differ from today.)
...a live goat piñata? Whoa.
Yes, sadly this isn’t the origin story for the Mexican piñata.
I’m glad Oligopsony provided the biblical reference; I learned about that one in Catechism class but couldn’t find the reference.
More sex does not have to mean more casual sex. There are lots of people in committed relationships (marriages) that would like to have more-similar sex drives. Nuns wouldn’t want their libido increased, but it’s not only for the benefit of the “playahs” either.
Also, I think the highest-voted comment (“I don’t think that any relationship style is the best (...) However, I do wish that people were more aware of the possibility of polyamory (...)”) is closer to the consensus than something like “everyone should have as many partners as much as possible”. LW does assume that polyamory and casual sex is optional-but-ok, though.
Hmm, that doesn’t sound right. I don’t want to make celibate people uncomfortable, I just want to have more casual sex myself. Also I have a weaker altruistic wish that people who aren’t “getting any” could “get some” without having to tweak their looks (the beauty industry) or their personality (the pickup scene). There could be many ways to make lots of unhappy people happier about sex and romance without tweaking your libido. Tweaking libido sounds a little pointless to me anyway, because PUA dogma (which I mostly agree with) predicts that people will just spend the surplus libido on attractive partners and leave unattractive ones in the dust, like they do today.
Well, some are. From the last survey:
Nope! I don’t have any certainty about what is best for society / mankind in the long run, but personally, I’m fine with monogamy, I’m married, have a kid, and don’t think “more casual sex” is necessarily a good thing.
I can, however, agree with Eliezer when he says it might be better if human sex drives were better adjusted—not because I value seeing more people screwing around like monkeys, but because it seems that the way things are now results in a great deal of frustration and unhappiness.
I don’t know about rape, but I expect that more sex drive for women and less for men would result in fewer divorces, because differences in sex drive are a frequent source of friction, as is infidelity (though it’s not clear that more-similar sex drives would result in less infidelity). That’s not to say that hacking people’s brains is the only solution, or the best one.
I’m a married, monogamous person who would love to be able to adjust my sex drive to match my spouse’s (and I think we would both choose to adjust up).
The Twilight books do an interesting riff on the themes of eternal life, monogamy, and extremely high sex drives.
If enough people feel similarly, and the discrepancy is real, the means will move toward each other through voluntary shifts without forcing anything on anyone, incidentally.
What “voluntary shifts” do you mean? I agree that small shifts in sex drive are possible based on individual choice, but not large ones. Also, why do the means matter?
Ah, misunderstanding. I did not mean “shifts by volition alone”, but “voluntary as opposed to forced” as pertains to AspiringKnitter’s earlier worry about Yudkowsky forcing “some sort of compromise where we lowered male sex drive a little and increased female sex drive a little.”
If interpreted as a prediction rather than a recommendation, it might happen through individual choice if the ability to modify these things directly becomes sufficiently available (and sufficiently safe, and sufficiently accepted, &c) because of impulses like those you expressed: pairings that desire to be monogamous and who are otherwise compatible might choose to self modify to be compatible on this axis as well, and this will move the averages closer together.
Got it, thanks.
I think people’s intuitions about sex drives are interesting, because they seem to differ. Earlier we had a discussion where it became clear that some conceptualized lust as something like hunger—an active harm unless fulfilled—while I had always generalized from one example and assumed lust simpliciter pleasant and merely better when fulfilled. Of course it would be inconvenient for other things if it were constantly present, and were I a Christian of the right type the ideal level would obviously be lower, so this isn’t me at all saying you’re crazy and incomprehensible in some veiled way—I just think these kinds of implicit conceptual differences are interesting.
« EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn’t appeal to me at all. Sorry, but I don’t WANT to want more sex. » Ok, but would you agree to lowering male sex drive then? Making it easier for those who want to follow a “no sex” path, and lowering the difference between males and females in terms of sex drive in the process? Eliezer’s goal was to lower the difference between the desires of the two sexes so they could both be happier. He proposed doing it by making them both move toward the average, but aligning to the lower of the two would fit the purpose too.
kilobug, Y U No quote using > ?!
Hrm… didn’t pay attention, sorry, I should indeed. Thanks for reminding me.
I am actually rather curious to hear more about your opinion on this topic. I personally would jump at the chance to become "better, stronger, faster" (and, of course, smarter), as long as doing so was my own choice. It is very difficult for me to imagine a situation where someone I trust tells me, for example, "this implant is 100% safe, cheap, never breaks down, and will make you think twice as fast, do you want it?", and I answer "no thanks".
EDIT: Basically, what Cthulhoo said. Sorry Cthulhoo, I didn’t see your comment earlier, somehow.
Explained one example below.
I was under the impression that your example dealt with a compulsory modification (higher sex drive for all women across the board), which is something I would also oppose; that’s why I specified “...as long as doing so was my own choice” in my comment. But I am under the impression—and perhaps I’m wrong about this—that you would not choose any sort of a technological enhancement of any of your capabilities. Is that so? If so, why?
No. I apologize for being unclear. EY has proposed modifications I don’t want, but that doesn’t mean every modification he supports is one I don’t want. I think I would be more skeptical than most people here, but I wouldn’t refuse all possible enhancements as a matter of principle.
No need to apologize, it was my fault for misunderstanding your position, and in fact it sounds like we agree !
I would be very interested in reading your opinion on this subject. There is sometimes a confirmation effect/death spiral inside the LW community, and it would be nice to be exposed to a completely different point of view. I may then modify my beliefs fully, in part or not at all as a consequence, but it’s valuable information for me.
I’ll bet US$1000 that this is Will_Newsome.
Why did you frame it that way, rather than that AspiringKnitter wasn’t a Christian, or was someone with a long history of trolling, or somesuch? It’s much less likely to get a particular identity right than to establish that a poster is lying about who they are.
Well, Newsome was a Catholic for a while at least! (Or something like one).
Wow. Now that you mention it, perhaps someone should ask AspiringKnitter what she thinks of dubstep...
Holy crap. I’ve never had a comment downvoted this fast, and I thought this was a pretty funny joke to boot. My mental estimate was that the original comment would end up resting at around +4 or +5. Where did I err?
I left it alone because I have absolutely no idea what you are talking about. Dubstep? Will likes, dislikes and/or does something involving dubstep? (Google tells me it is a kind of dance music.)
Explanation: Will once (in)famously claimed that watching certain dubstep videos would bolster some of your math intuitions.
(Er, well, math intuitions in a few specific fields, and only one or two rather specific dubstep videos. I’m not, ya know, actually crazy. The important thing is that that video is, as the kids would offensively say, “sicker than Hitler’s kill/death ratio”.) newayz I upvoted your original comment.
Do we count assists now?
And if so, who gets the credit for deaths by old age?
Post edited to reflect this, apologies for misrepresenting you.
I guess the subject is a bit touchy now.
That’s remarkably confident. This doesn’t really read like Newsome to me (and how would one find out with sufficient certainty to decide a bet for that much?).
Just how confident is it? It’s a large figure and colloquially people tend to confuse size of bet with degree of confidence—saying a bigger number is more of a dramatic social move. But ultimately to make a bet at even odds all Mitchell needs is to be confident that if someone takes him up on the bet then he has 50% or more chance of being correct. The size of the bet only matters indirectly as an incentive for others to do more research before betting.
Mitchell’s actual confidence is some unspecified figure between 0.5 and 1 and is heavily influenced by how overconfident he expects others to be.
This would only be true if money had linear utility value [1]. I, for example, would not take a $1000 bet at even odds even if I had 75% confidence of winning, because with my present financial status I just can’t afford to lose $1000. But I would take such a bet of $100.
The utility of winning $1000 is not the negative of the utility of losing $1000.
[1] or, to be precise, if it were approximately linear in the range of current net assets +/- $1000
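(A minimal sketch of this point, under assumptions that are entirely made up: log utility and about $1,100 of discretionary savings. With those numbers, the same 75%-confidence even-odds bet comes out utility-positive at a $100 stake and utility-negative at a $1,000 stake, which is exactly the asymmetry described above.)

```python
import math

def expected_log_utility(wealth: float, stake: float, p_win: float) -> float:
    """Expected log-utility of accepting an even-odds bet of `stake` at win probability `p_win`."""
    return p_win * math.log(wealth + stake) + (1 - p_win) * math.log(wealth - stake)

wealth = 1_100.0  # assumed discretionary savings; purely illustrative
p_win = 0.75      # bettor's own confidence of winning

for stake in (100.0, 1_000.0):
    take, decline = expected_log_utility(wealth, stake, p_win), math.log(wealth)
    verdict = "worth taking" if take > decline else "not worth taking"
    print(f"${stake:>6.0f} stake: EU(take) = {take:.4f}, EU(decline) = {decline:.4f} -> {verdict}")
```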
From what I have inferred about Mitchell’s financial status the approximation seemed safe enough.
Fair enough in this case, but it’s important to avoid assuming that the approximation is universally applicable.
In a case with extremely asymmetric information like this one they actually are almost the same thing, since the only payoff you can reasonably expect is the rhetorical effect of offering the bet. Offering bets the other party can refuse and the other party has effectively perfect information about can only lose money (if money is the only thing the other party cares about and they act at least vaguely rationally).
Risk aversion and other considerations like gambler’s ruin usually mean that people insist on substantial edges over just >50%. This can be ameliorated by wealth, but as far as I know, Porter is at best middle-class and not, say, a millionaire.
So your points are true and irrelevant.
We obviously use the term ‘irrelevant’ to mean different things.
I have no idea who this Newsome character is, but I bet US$1 that there’s no easy way to implement the answer to the question,
without invading someone’s privacy, so I’m not going to play.
Agree on a trusted third party (gwern, Alicorn, NancyLebowitz … high-karma longtimers who showed up in this thread), and have AK call them on the phone, confirming details, then have the third party confirm that it’s not Will_Newsome.
… though the main problem would be, do people agree to bet before or after AK agrees to such a scheme?
How would gwern, Alicorn or NancyLebowitz confirm that anything I said by phone meant AspiringKnitter isn’t Will Newsome? They could confirm that they talked to a person. How could they confirm that that person had made AspiringKnitter’s posts? How could they determine that that person had not made Will Newsome’s posts?
At the very least, they could dictate an arbitrary passage (or an MD5 hash) to this person who claims to be AK, and ask them to post this passage as a comment on this thread, coming from AK’s account. This would not definitively prove that the person is AK, but it might serve as a strong piece of supporting evidence.
In addition, once the “AK” persona and the “WillNewsome” persona each post a sufficiently large corpus of text, we could run some textual analysis algorithms on it to determine if their writing styles are similar; Markov Chains are surprisingly good at this (considering how simple they are to implement).
The problem of determining a person’s identity on the Internet, and doing so in a reasonably safe way, is an interesting challenge. But in practice, I don’t really think it matters that much, in this case. I care about what the “AK” persona writes, not about who they are pretending not to be.
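(A rough sketch of the kind of Markov-chain comparison described above; nobody in the thread actually ran this. The corpora below are placeholders, and the trigram order, add-one smoothing, and alphabet size are arbitrary choices. The idea is just to fit a character-level model per author and see which one assigns the disputed text the lower cross-entropy.)

```python
# Sketch: per-author character-trigram models, compared by cross-entropy on a disputed text.
import math
from collections import Counter, defaultdict

def trigram_counts(text: str):
    """Count next-character frequencies for each two-character context."""
    counts = defaultdict(Counter)
    padded = "  " + text.lower()
    for i in range(len(padded) - 2):
        counts[padded[i:i + 2]][padded[i + 2]] += 1
    return counts

def cross_entropy(model, text: str, alphabet_size: int = 64) -> float:
    """Average negative log2 probability per character, with add-one smoothing."""
    padded = "  " + text.lower()
    total, n = 0.0, 0
    for i in range(len(padded) - 2):
        context, char = padded[i:i + 2], padded[i + 2]
        counter = model.get(context, Counter())
        prob = (counter[char] + 1) / (sum(counter.values()) + alphabet_size)
        total -= math.log2(prob)
        n += 1
    return total / max(n, 1)

corpus_a = "placeholder text standing in for everything AspiringKnitter has posted"
corpus_b = "placeholder text standing in for everything Will_Newsome has posted"
disputed = "a new comment whose authorship we want to guess"

model_a, model_b = trigram_counts(corpus_a), trigram_counts(corpus_b)
print("cross-entropy under A's model:", round(cross_entropy(model_a, disputed), 3))
print("cross-entropy under B's model:", round(cross_entropy(model_b, disputed), 3))
```

(With corpora this tiny the numbers mean nothing; the comparison only becomes suggestive once each corpus runs to many thousands of characters, which is the caveat in the comment above.)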
How about doing this already, with all the stuff they’ve written before the original bet?
I know Will Newsome in real life. If a means of arbitrating this bet is invented, I will identify AspiringKnitter as being him or not by visual or voice for a small cut of the stakes. (If it doesn’t involve using Skype, telephone, or an equivalent, and it’s not dreadfully inconvenient, I’ll do it for free.)
A sidetrack: People seem to be conflating AspiringKnitter’s identity as a Christian and a woman. Female is an important part of not being Will Newsome, but suppose that AspiringKnitter were a male Christian and not Will Newsome. Would that make a difference to any part of this discussion?
More identity issues: My name is Nancy Lebovitz with a v, not a w.
Sorry ’bout the spelling of your name, I wonder if I didn’t make the same mistake before …
Well, the biggest thing AK being a male non-Will Christian would change, is that he would lose an easy way to prove to a third party that he’s not Will Newsome and thus win a thousand bucks (though the important part is not exactly being female, it’s having a recognizably female voice on the phone, which is still pretty highly correlated).
Rationalist lesson that I’ve derived from the frequency that people get my name wrong: It’s typical for people to get it wrong even if I say it more than once, spell it for them, and show it to them in writing. I’m flattered if any of my friends start getting it right in less than a year.
Correct spelling and pronunciation of my name is a simple, well-defined, objective matter, and I’m in there advocating for it, though I cut people slack if they’re emotionally stressed.
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks. Less Wrong has a lot about cognitive biases, but not so much about perceptual biases.
This is a feature, not a bug. Natural language has lots of redundancy, and if we read one letter at a time rather than in word-sized chunks we would read much more slowly.
I think you have causality reversed here. It’s the redundancy of our languages that’s the “feature”—or, more precisely, the workaround for the previously existing hardware limitation. If our perceptual systems did less “filling in of blanks,” it seems likely that our languages would be less redundant—at least in certain ways.
I think redundancy was originally there to counteract noise, of which there was likely a lot more in the ancestral environment, and as a result there’s more-than-enough of it in such environments as reading text written in a decent typeface one foot away from your face, and the brain can then afford to use it to read much faster. (It’s not that hard to read at 600 words per minute with nearly complete understanding in good conditions, but if someone was able to speak that fast in a not-particularly-quiet environment, I doubt I’d be able to understand much.)
Yeah, I agree with that.
I said
I think it’s time to close out this somewhat underspecified offer of a bet. So far, AspiringKnitter and Eliezer expressed interest but only if a method of resolving the bet could be determined, Alicorn offered to play a role in resolving the bet in return for a share of the winnings, and dlthomas offered up $15.
I will leave the possibility of joining the bet open for another 24 hours, starting from the moment this comment is posted. I won’t look at the site during that time. Then I’ll return, see who (if anyone) still wants a piece of the action, and will also attempt to resolve any remaining conflicts about who gets to participate and on what terms. You are allowed to say “I want to join the bet, but this is conditional upon resolving such-and-such issue of procedure, arbitration, etc.” Those details can be sorted out later. This is just the last chance to shortlist yourself as a potential bettor.
I’ll be back in 24 hours.
And the winners are… dlthomas, who gets $15, and ITakeBets, who gets $100, for being bold enough to bet unconditionally. I accept their bets, I formally concede them, aaaand we’re done.
You know, I followed your talk about betting but never once considered that I could win money for realz if I took you up on it. The difficulty of proving such things made the whole subject seem abstract. Oops.
And thus concludes the funniest thread on LessWrong in a very long time. Thanks, folks.
Thank you.
What did they win money for?
Betting money. That is how such things work.
You’re such a dick. Haha. Upvoted.
You not being Will_Newsome. (I can’t imagine how bizarre it must be to be watching this conversation from your perspective.)
Wait, but what changed that caused Mitchell_Porter to realize that?
I didn’t exactly realize it, but I reduced the probability. My goal was never to make a bet, my goal was to sockblock Will. But in the end I found his protestations somewhat convincing; he actually sounded for a moment like someone earnestly defending himself, rather than like a joker. And I wasn’t in the mood to re-run my comparison between the Gospel of Will and the Knitter’s Apocryphon. So I tried to retire the bet in a fair way, since having an ostentatious unsubstantiated accusation of sockpuppetry in the air is almost as corrosive to community trust as it is to be beset by the real thing. (ETA: I posted this before I saw Kevin’s comment, by the way!)
“Next time just don’t be a dick and you won’t lose a hundred bucks,” says the unreflective part of my brain whose connotations I don’t necessarily endorse but who I think does have a legitimate point.
No idea. Don’t have to show your cards if you fold...
I think he just gave up and didn’t want to be the guy sowing seeds of discontent with no evidence. That kind of thing is bad for communities.
Mitchell asked Will directly at http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jby so perhaps he just trusts Will not to lie when using the Will_Newsome account.
I’ll stake $500 if eligible.
When would the answer need to be known by?
I am interested.
Edit: Putting up $100, regardless of anyone else’s participation, and I’m prepared to demonstrate that I’m not Will_Newsome if that is somehow necessary.
I’ll stake $100 against you, if and only if Eliezer also participates.
(Replying rather than editing, to make sure that my comment displays as un-edited.)
I should also stipulate that I am not, nor have I ever been, Will Newsome.
It’s not impossible that I was once Will Newsome, I suppose, nor even that I currently am. But if so, I’m unaware of the fact.
I am a known magus, so even an Imperius curse is not out of the question.
Turns out LW is a Chesterton-esque farce in which all posters are secretly Wills trolling Wills.
Then I’m really wasting time here.
Yes, I all are!
Or you’ve been neglecting to treat your Spontaneous Duplication.
Unfortunately, I don’t have the spare money to take the other side of the bet, but Will showed a tendency to head off into foggy abstractions which I haven’t seen in Aspiring Knitter.
Will_Newsome does not seem, one would say, incompetent. I have never read a post by him in which he seemed to be unknowingly committing some faux pas. He should be perfectly capable of suppressing that particular aspect of his posting style.
And what do I have to do to win your bet, given that I’m not him (and hadn’t even heard of him before)? After all, even if you saw me in person, you could claim I was paid off by this guy to pretend to be AspiringKnitter. Or shall I just raise my right hand?
I don’t see why this guy wouldn’t offer such a bet, knowing he can always claim I’m lying if I try to provide proof. No downside, so it doesn’t matter how unlikely it is, he could accuse any given person of sockpuppeting. The expected return can’t be negative. That said, the odds here being worse than one in a million, I don’t know why he went to all that trouble for an expected return of less than a cent. There being no way I can prove who I am, I don’t know why I went to all the trouble of saying this, either, though, so maybe we’re all just a little irrational.
Let’s first confirm that you’re willing to pay up, if you are who I say you are. I will certainly pay up if I’m wrong…
That’s problematic since if I were Newsome, I wouldn’t agree. Hence, if AspiringKnitter is Will_Newsome, then AspiringKnitter won’t even agree to pay up.
Not actually being Will_Newsome, I’m having trouble considering what I would do in the case where I turned out to be him. But if I took your bet, I’d agree to it. I can’t see how such a bet could possibly get me anything, though, since I can’t see how I’d prove that I’m not him even though I’m really not him.
All right, how about this. If I presented evidence already in the public domain which made it extremely obvious that you are Will Newsome, would you pay up?
By the way, when I announced my belief about who you are, I didn’t have personal profit in mind. I was just expressing confidence in my reasoning.
There is no such evidence. What do you have in mind that would prove that?
You write stream-of-consciousness run-on sentences which exhibit abnormal disclosure of self while still actually making sense (if one can be bothered parsing them). Not only do you share this trait with Will, the themes and the phrasing are the same. You have a deep familiarity with LessWrong concerns and modes of thought, yet you also advocate Christian metaphysics and monogamy. Again, that’s Will.
That’s not yet “extremely obvious”, but it should certainly raise suspicions. I expect that a very strong case could be made by detailed textual comparison.
AspiringKnitter’s arguments for Christianity are quite different from Will’s, though.
(Also, at the risk of sounding harsh towards Will, she’s been considerably more coherent.)
I think if Will knew how to write this non-abstractly, he would have a valuable skill he does not presently possess, and he would use that skill more often.
By the time reflective and wannabe-moral people are done tying themselves up in knots, what they usually communicate is nothing; or, if they do communicate, you can hardly tell them apart from the people who truly can’t.
Point of curiosity: if you took the point above and rewrote it the way you think AspiringKnitter would say it, how would you say it?
(ETA: Something like this:)
What I’m saying is that most people who write a Less Wrong comment aren’t totally stressing out about all the tradeoffs that inevitably have to be made in order to say anything at all. There’s a famous quote whose gist is ‘I apologize that this letter is so long, but I didn’t have very much time to write it’. The audience has some large and unknown set of constraints on what they’re willing to glance at, read, take seriously, and so on, and the writer has to put a lot of work into meeting those constraints as effectively as possible. Some tradeoffs are easy to make: yes, a long paragraph is a self-contained structure, but that’s less important than readability. Others are a little harder: do I give a drawn-out concrete example of my point, or would that egregiously inflate the length of my comment?
There are also the author’s internal constraints re what they feel they need to say, what they’re willing to say, what they’re willing to say without thinking carefully about whether or not it’s a good idea to say, how much effort they can put into rewriting sentences or linking to relevant papers while their heart’s pumping as if the house is burning down, vague fears of vague consequences, and so on and so forth for as long as the author’s neuroticism or sense of morality allows.
People who are abnormally reflective soon run into meta-level constraints: what does it say about me that I stress out this much at the prospect of being discredited? By meeting these constraints am I supporting the proliferation of a norm that isn’t as good as it would be if I met some other, more psychologically feasible set of constraints? Obviously the pragmatic thing to do is to “just go with it”, but “just going with it” seems to have led to horrifying consequences in the past; why do I expect it to go differently this time?
In the end the author is bound to become self-defeating, dynamically inconsistent. They’ll like as not end up loathing their audience for inadvertently but non-apologetically putting them in such a stressful situation, then loathing themselves for loathing their audience when obviously it’s not the audience’s fault. The end result is a stressful situation where the audience wants to tell the author to do something very obvious, like not stress out about meeting all the constraints they think are important. Unfortunately if you’ve already tied yourself up in knots you don’t generally have a hand available with which to untie them.
ETA: On the positive side they’ll also build a mega-meta-FAI just to escape all these ridiculous double binds. “Ha ha ha, take that, audience! I gave you everything you wanted! Can’t complain now!”
And yet, your g-grandparent comment, about which EY was asking, was brief… which suggests that the process you describe here isn’t always dominant.
Although when asked a question about it, instead of either choosing or refusing to answer the question, you chose to back all the way up and articulate the constraints that underlie the comment.
Hm? I thought I’d answered the question. I.e. I rewrote my original comment roughly the way I’d expect AK to write it, except with my personal concerns about justification and such, which is what Eliezer had asked me to do, ’cuz he wanted more information about whether or not I was AK, so that he could make money off Mitchell Porter. I’m reasonably confident I thwarted his evil plans in that he still doesn’t know to what extent I actually cooperated with him. Eliezer probably knows I’d rather my friends make money off of Mitchell Porter, not Eliezer.
Oh! I completely missed that that was what you were doing… sorry. Thanks for clarifying.
You know, in some ways, that does sound like me, and in some ways it really still doesn’t. Let me first of all congratulate you on being able to alter your style so much. I envy that skill.
Your use of “totally” is not the same as my use of “totally”; I think it sounds stupid (personal preference), so if I said it, I would be likely to backspace and write something else. Other than that, I might say something similar.
I would have said “that goes something like” instead of “whose gist is”, but that’s the sort of concept I might well have communicated in roughly the manner I would have communicated it.
An interesting point, and MUCH easier to understand than your original comment in your own style. This conveys the information more clearly.
This has become a run-on sentence. It started like something I would say, but by the end, the sentence is too run-on to be my style. I also don’t use the word “neuroticism”. It’s funny, but I just don’t. I also try to avoid the word “nostrils” for no good reason. In fact, I’m disturbed by having said it as an example of another word I don’t use.
However, this is a LOT closer to my style than your normal writing is. I’m impressed. You’re also much more coherent and interesting this way.
I would probably say “exceptionally” or something else other than “abnormally”. I don’t avoid it like “nostrils” or just fail to think of it like “neuroticism”, but I don’t really use that word much. Sometimes I do, but not very often.
Huh, that’s an interesting thought.
Certainly something I’ve considered. Sometimes in writing or speech, but also in other areas of my life.
I might have said this, except that I wouldn’t have said the first part because I don’t consider that obvious (or even necessarily true), and I would probably have said “horrific” rather than “horrifying”. I might even have said “bad” rather than either.
I would probably have said that “many authors become self-defeating” instead of phrasing it this way.
Two words I’ve never strung together in my life. This is pure Will. You’re good, but not quite perfect at impersonating me.
Huh, interesting. Not quite what I might have said.
...Why don’t they? Seriously, I dunno if people are usually aware of how uncomfortable they make others.
I’m afraid I don’t understand.
And I wouldn’t have said this because I don’t understand it.
Thank you, that was interesting. I should note that I wasn’t honestly trying to sound like you; there was a thousand bucks on the table so I went with some misdirection to make things more interesting. Hence “dynamically inconsistent” and “totally” and so on. I don’t think it had much effect on the bet though.
Have you looked into and/or attempted methods of lowering your anxiety?
Yes. Haven’t tried SSRIs yet. Really I just need a regular meditation practice, but there’s a chicken and egg problem of course. Or a prefrontal cortex and prefrontal cortex exercise problem. The solution is obviously “USE MOAR WILLPOWER” but I always forget that or something. Lately I’ve been thinking about simply not sinning, it’s way easier for me to not do things than do things. This tends to have lasting effects and unintended consequences of the sort that have gotten me this far, so I should keep doing it, right? More problems more meta.
IME, more willpower works really poorly as a solution to pretty much anything, for much the same reason that flying works really poorly as a way of getting to my roof. I mean, I suspect that if I could fly, getting to my roof would be very easy, but I can’t fly.
I also find that regular physical exercise and adequate sleep do more to manage my anxiety in the long term (that is, on a scale of months) than anything else I’ve tried.
Have you tried yoga or tai chi as meditation practices? They may be physically complex/challenging enough to distract you (some of the time) from verbally-driven distraction.
I suspect that “not sinning” isn’t simple. How would you define sinning?
Verbally-driven distraction isn’t much of an issue, it’s mostly just getting to the zafu. Once there, even 5 minutes of meditation is enough to calm me down for 30 minutes, which is a pretty big deal. I’m out of practice; I’m confident I can get back into the groove, but first I have to actually make it to the zafu more than once every week or two. I think I want to stay with something that I already identify with really powerful positive experiences, i.e. jhana meditation. I may try contemplative prayer at some point for empiricism’s sake.
Re sinning… now that I think about it I’m not sure that I could do much less than I already do. I read a lot and think a lot, and reflectively endorse doing so, mostly. I’m currently writing a Less Wrong comment which is probably a sin, ‘cuz there’s lots of heathens ’round these parts among other reasons. Huh, I guess I’d never thought about demons influencing norms of discourse on a community website before, even though that’s one of the more obvious things to do. Anyway, yah, the positive sins are sorta simplistically killed off in their most obvious forms, except pride I suppose, while the negative ones are endless.
I gather that meditating at home is either too hard or doesn’t work as well?
?
I do meditate at home! “Zafu” means “cushion”. Yeah, I have trouble remembering to walk 10 feet to sit down in a comfortable position on a comfortable cushion instead of being stressed about stuff all day. Brains...
Not sure what the question mark is for. Heathens are bad, it’s probably bad to hang out with them, unless you’re a wannabe saint and are trying to convert them, which I am, but only half-heartedly. Sin is all about contamination, you know? Hence baptism and stuff. Brains...
You are not doing this in any way, shape, or form, unless I missed some post-length or sequence-length argument of yours. (And I don’t mean a “hint” as to what you might believe.) If you have something to say on the topic, you clearly can’t or won’t say it in a comment.
I have to tentatively classify your “trying” as broken signaling (though I notice some confusion on my part). If you were telling the truth about your usual mental state, and not deliberately misleading the reader in some odd way, you’ve likely been trying to signal that you need help.
Sorry, wait, maybe there’s some confusion? Did you interpret me saying “convert” as meaning “convert them to Christianity”? ’Cuz what I meant was convert people to the side of reason more generally, e.g. by occasionally posting totally-non-trolling comments about decision theory and stuff. I’m not a Christian. Or am I misinterpreting you?
I’m not at all trying to signal that I need help, if I seem to be signaling that then it’s an accidental byproduct of some other agenda which is SIGNIFICANTLY MORE MANLYYYY than crying for help.
Love the attitude. And for what it’s worth I didn’t infer any signalling of need for help.
Quick response: I saw that you don’t classify your views as Christianity. I do think you classify them as some form of theism, but I took the word “convert” to mean ‘persuade people of whatever the frak you want to say.’
Sorry for the misunderstanding about where you meditate—I’m all too familiar with distraction and habit interfering with valuable self-maintenance.
As for heathens, you’re from a background which is very different from mine. My upbringing was Jewish, but not religiously intense. My family lived in a majority Christian neighborhood.
I suppose it would have been possible to avoid non-Jews, but the social cost would have been very high, and in any case, it was just never considered as an option. To the best of my knowledge, I wasn’t around anyone who saw religious self-segregation as a value. At all. The subject never came up.
I hope I’m not straying into other-optimizing, but I feel compelled to point out that there’s more than one way of being Christian, and not all of them include avoiding socializing with non-Christians.
Ah, I’m not a Christian, and it’s not non-Christians that bother me so much as people who think they know something about how the world works despite, um, not actually knowing much of anything. Inadvertent trolls. My hometown friends are agnostic with one or two exceptions (a close friend of mine is a Catholic, she makes me so proud), my SingInst-related friends are mostly monotheists these days whether they’d admit to it or not I guess but definitely not Christians. I don’t think of for example you as a heathen; there are a lot of intelligent and thoughtful people on this site. I vaguely suspect that they’d fit in better in an intellectual Catholic monastic order, e.g. the Dominicans, but alas it’s hard to say. I’m really lucky to know a handful of thoughtful SingInst-related folk, otherwise I’d probably actually join the Dominicans just to have a somewhat sane peer group. Maybe. My expectations are probably way too high. I might try to convince the Roman Catholic Church to take FAI seriously soon; I actually expect that this will work. They’re so freakin’ reasonable, it’s amazing. Anyway I’m not sure but my point might be that I’m just trying to stay away from people with bad epistemic habits for fear of them contaminating me, like a fundamentalist Christian trying to keep his high epistemic standards amidst a bunch of lions and/or atheists. Better to just stay away from them for the most part. Except hanging out with lions is pretty awesome and saint-worthy whereas hanging out with atheists is just kinda annoying.
Is this meant to be ironic?
Half-ironic, yeah.
Then upvoted.
So why are you hanging out with them?
Because I’m sinful? And not all of them are heathens, I’m just prone to exaggeration. I think this new AspiringKnitter person is cool, for example; likelihood-ratio-she apparently can supernaturally tell good from bad, which might make my FAI project like a billion times easier, God willing. NancyLebovitz is cool. cousin it is cool. cousin it I can interact with on Facebook but not all of the cool LW people. People talk about me here, I feel compelled to say something for some reason, maybe ’cuz I feel guilty that they’re talking about me and might not realize that I realize that.
Please don’t consider this patronizing but… the writing style of this comment is really cute.
I think you broke whatever part of my brain evaluates people’s signalling. It just gave up and decided your writing is really cute. I really have no idea what impression to form of you; the experience was so unusual that I felt I had to comment.
Thanks to your priming now I can’t see “AspiringKnitter” without mentally replacing it with “AspiringKittens” and a mental image of a Less Wrong meetup of kittens who sincerely want to have better epistemic practices. Way to make the world a better place.
That’s what the SF Less Wrong meetups are missing: Kittens.
Just make sure you don’t have anyone with bad allergies...
Independently of you, I PM’d her the exact same thing. Well, guess I’m in good company.
Are you AspiringKnitter, or the author of AspiringKnitter?
Not as far as I know, but you seemed pretty confident in that hypothesis so maybe you know something I don’t.
I think I only ever made one argument for Christianity? It was hilarious, everyone was all like WTF!??! and I was like TROLOLOLOL. I wonder if Catholics know that trolling is good, I hear that Zen folk do. Anyway it was naturally a soteriological argument which I intended to be identical to the standard “moral transformation” argument which for naturalists (metaphysiskeptics?) is the easiest of the theories to swallow. If I was expounding my actual thoughts on the matter they would be significantly more sophisticated and subtle and would involve this really interesting part where I talk about “Whose Line Is It Anyway?” and how Jesus is basically like Colin Mochrie specifically during the ‘make stupid noises then we make fun of you for sucking but that redeems the stupid noises’ part. I’m talking about something brilliant that doesn’t exist I’m like Borges LOL!
Local coherence is the hobgoblin of minuscule minds; global coherence is next to godliness.
(ETA: In case anyone can’t tell, I just discovered Dinosaur Comics and, naturally, read through half the archives in one sitting.)
Downvoted, by the way. I want to signal my distaste for being confused for you. Are you using some form of mind-altering substance or are you normally like this? I think you need to take a few steps back. And breathe. And then study how to communicate more clearly, because I think either you’re having trouble communicating or I’m having trouble understanding you.
I’m not quite in a mood to downvote, but I think you were wildly underestimating how hard it would be for Will to change what he’s doing.
It would probably require the community stopping feeding the ugly little lump.
Also,
Will is good-looking, normal-sized, and not at all lumpy. If you must insult people, can you do it in a less wrong way?
I’m referring to his being an admitted troll.
To be fair Will is more the big and rocky kind of troll. You can even see variability that can only be explained by drastic temperature changes!
That works.
We don’t approve of that kind of language used against anyone considered to be of our in-group, no matter how weird they might act. Please delete this.
Do you normally refer to yourselves as ‘we’? I never noticed that before. (Witty, though.)
Nope, I’m simply being confident that the vast majority of the LW community stands with me here.
(Well, in a sense, it is the Less Wrong Hivemind speaking through me here, so yes, It refers to Itself as “we”.)
Ah. In that case, I have to ask how you explain the vote totals?
That is, I would expect a comment of which the Hivemind strongly disapproves to accumulate a negative score over a month-plus.
Edit: Uh, not sure what the downvote’s for...? I mean no offence.
Vote totals don’t mean what you think they mean.
This is actually a good point! I stand corrected.
That’s what I’d expect, as well, though I wish it weren’t so. I usually try to make the effort to upvote or downvote comments based on how informative, well-written, and well-reasoned they are, not whether I agree with them or not (with the exception of poll-style comments). Of course, just because I try to do this, doesn’t mean that I succeed...
Most people often just don’t notice a comment deep in some thread. But if their attention was drawn to it, I say they’d react this way.
For what it’s worth, I agree. Will’s kind of awesome, in a weird way. (Though my first reaction was “Wait, just our in-group? That’s groupist!”) But I’m not nearly as confident in my model of what others approve or disapprove of.
On second thought maybe I am in a sense; my cortisol (?) levels have been ridiculously high ever since I learned that people have been talking about me here on LW. For about a day before that I’d been rather abnormally happy—my default state matches the negative symptoms of schizophrenia as you’d expect of a prodrome, and “happiness” as such is not an emotion I experience very much at all—which I think combined with the unexpected stressor caused my body to go into freak-out-completely mode, where it remains and probably will remain until I spend time with a close friend. Even so I don’t think this has had as much an effect on my writing style as reading a thousand Dinosaur Comics has.
Have you sought professional help in the past? If not, do nothing else until you take some concrete step in that direction. This is an order from your decision theory.
Yes, including from the nice but not particularly insightful folk at UCSF, but negative symptoms generally don’t go away, ever. My brain is pretty messed up. Jhana meditation is wonderful and helps when I can get myself to do it. Technically if I did 60mg of Adderall and stayed up for about 30 to 45 hours then crashed, then repeated the process forever, I think that would overall increase my quality of life, but I’m not particularly confident of that, especially as the outside view says that’s a horrible idea. In my experience it ups the variance which is generally a good thing. Theoretically I could take a bunch of nitrous oxide near the end of the day so as to stay up for only about 24 hours as opposed to 35 before crashing; I’m not sure if I should be thinking “well hell, my dopaminergic system is totally screwed anyway” or “I should preserve what precious little automatic dopaminergic regulation I have left”. In general nobody knows nothin’ ‘bout nothin’, so my stopgap solution is moar meditation and moar meta.
Have you tried doing a detailed analysis of what would make it easier for you to meditate, and then experimenting to find whether you’ve found anything which would actually make it easier? Is keeping your cushion closer to where you usually are a possibility?
Not particularly detailed. It’s hard to do better than convincing my girlfriend to bug me about it a few times a day, which she’s getting better at. I think it’s a gradual process and I’m making progress. I’m sure Eliezer’s problems are quite similar, I suppose I could ask him what self-manipulation tactics he uses besides watching Courage Wolf YouTube videos.
I suspect it would, at least in some ways. I’m mentally maybe not too dissimilar, and have done a few months of polyphasic sleeping, supported by caffeine (which I’m way too sensitive to). My mental abilities were pretty much crap, and damn was I agitated, but I was overall happier, baseline at least.
I do recommend 4+ days of sleep deprivation and desperately trying to figure out how an elevator in HL2 works as a short-term treatment for can’t-think-or-talk-but-bored, though.
No and no. I’m only like this on Less Wrong. Trust me, I know it doesn’t seem like it, but I’ve thought about this very carefully and thoroughly for a long time. It’s not that I’m having trouble communicating; it’s that I’m not trying to. Not anything on the object level at least. The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria. In response you may feel very, very compelled to ask: “If you’re not trying to communicate as such then why are you expending your and my effort writing out diatribes?” Trust me, I know it doesn’t seem like it, but I’ve thought about this very carefully and thoroughly for a long time. “I’m going to downvote you anyway; I want to discourage flagrant violations of reasonable social norms of communication.” As expected! I’m clearly not optimizing for karma. And my past selves managed to stock up like 5,000 karma anyway so I have a lot to burn. I understand exactly why you’re downvoting, I have complex intuitions about the moral evidence implicit in your vote, and in recompense I’ll try harder to “be perfect”.
So it is more just trolling.
Which, from the various comments Will has made along these lines we can roughly translate to “via incoherent abstract rationalizations Will_Newsome has not only convinced himself that embracing the crazy while on lesswrong is a good idea but that doing so is in fact a moral virtue”. Unfortunately this kind of conviction is highly resistant to persuasion. He is Doing the Right Thing. And he is doing the right thing from within a complex framework wherein not doing the right thing has potentially drastic (quasi-religious-level) consequences. All we can really do is keep the insane subset of his posts voted below the visibility threshold and apply the “don’t feed the troll” policy while he is in that mode.
Good phrase, I think I’ll steal it. Helps me quickly describe how seriously I take this whole justification thing.
ACBOD. ;P
HOW CAN ANYONE DOWNVOTE THAT IT WAS SO CLEVER LOL?
NO BUT SERIOUSLY GUYS IT WAS VERY CLEVER I SWITCHED THE C AND THE D SO AS TO MORE ACCURATELY DESCRIBE MY STATE OF MIND LOL?
One of my Facebook activities is “finding bits of Chaitin’s omega”! I am an interesting and complex person! I am nice to my girlfriend and she makes good food like fresh pizza! Sometimes I work on FAI stuff, I’m not the best at it but I’m surprisingly okay! I found a way to hack the arithmetical hierarchy using ambient control, it’s really neat, when I tell people about it they go like “WTF that is a really neat idea Will!”! If you’re nice to me maybe I’ll tell you someday? You never know, life is full of surprises allegedly!
Greetings, Will_Newsome.
This particular post of yours was, last night, at 4 upvotes. Do you have any hypothesis as to why that was the case? I am rather curious as to how that happened.
An instance of the more general phenomenon. If I recall the grandparent in particular was at about −3 then overnight (wedrifid time) went up to +5 and now seems to be back at −4. Will’s other comments from the time period all experienced a fluctuation of about the same degree. I infer that the fickle bulk upvotes and downvotes are from the same accounts and with somewhat less confidence that they are from the same user.
Or, you know, memories.
It’s possible that the aesthetic only appeals to voters in certain parts of the globe.
Are you saying there is a whole country which supports internet trolls? Forget WMDs, the next war needs to be on the real threat to (the convenience of) civilization!
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD DAMMIT I can’t take it anymore, why does English treat “or” as “xor”? We have “either x or y” for that. Now I have to say “and/or” which looks and is stupid. I refuse.
The general impression of the Book of Job seems to be to lower people’s opinion of God rather than raise their opinion of trolling.
And it was an atheist philosopher who first called trolling a art.
I DID NOT KNOW THAT THANK YOU. Not only is Schopenhauer responsible for Borges, he is a promoter of trolling… this is amazing.
I hear that Zen people have been doing it for like 1,000 years, but maybe they didn’t think of it as an art as such.
If you like it then you should have put an upvote on it.
Now I have. And on that comment too. All the single comments.
Which God? If it is Yahweh then that guy’s kind of a dick and I don’t value his opinion much at all. But he isn’t enough of a dick that I can reverse stupidity to arrive at anything useful either.
/nods, makes sense.
Neither, really. There are trickster figures all over the place in mythology; it’d take a fairly impressive argument to get me to believe that YHWH is one of them, but assuming such an argument I don’t think it’d imply many updates that “Coyote likes trolling people” (a nearly tautological statement) wouldn’t.
Hm? Even if YHWH existed and was really powerful, you still wouldn’t update much if you found out He likes to troll people? Or does your comment only apply if YHWH is a fiction?
You could say, “x or y or both” in place of “x and/or y”. I’m not sure if that looks more or less stupid.
I’ll try it out at some point at least, thanks for the suggestion.
If the Bible is the world’s longest-running Rickroll, does that count?
What’s the hypothesis, that the Bible was subtly optimized to bring about Rick Astley and Rickrolling 1,500 or so years later? That… that does seem like His style… I mean obviously the Bible would be optimized to do all kinds of things, but that might be one of the subgoals, you never know.
Aw, wedrifid, that’s mean. :( I was asleep during that time. There’s probably some evidence of that on my Facebook page, i.e. no activity until about like 5 hours ago when I woke up. Also you should know that I’m not so incredibly lame/retarded as to artificially inflate a bunch of comments’ votes for basically no reason other than to provoke accusations that I had done so.
Is it? I didn’t think it was something that you would be offended by. Since the mass voting was up but then back down to where it started it isn’t a misdemeanor so much as it is peculiar and confusing. The only possibility that sprang to mind was that it could be an extension of your empirical experimentation. You (said that you) actually made a bunch of the comments specifically so that they would get downvotes so that you could see how that influenced the voting behavior of others. Tinkering with said votes to satisfy a further indecipherable curiosity doesn’t seem like all that much of a stretch.
No, not really at all, I was just playing around. I don’t really get offended; I get the impression that you don’t either. And yeah upon reflection your hypothesis was reasonable, I probably only thought it was absurd ‘cuz I have insider knowledge. (ETA: Reasoning about counterfactual states of knowledge is really hard; not only practically speaking ’cuz brains aren’t meant to do that, but theoretically too, which is why people get really confused about anthropics. The latter point deserves a post I mean Facebook status update at some point.)
That’s true. It’s tricky enough that Eliezer seems to get confused about it (or at least I thought he was confusing himself back when he wrote a post or two on the subject.)
That actually sounds like a lot of fun, if followed up with a specific denial of having done that.
I guess that sounds fun? Or why do you think it sounds fun? I think it’d only be worth it if the thread was really public, like when that Givewell dude made that one post about naive EU maximization and charity.
Why does that sound fun? I don’t know. I do know that when I am less-than-lucid, I am liable to lead individuals on conversational wild-goose chases. Within these conversations, I will use a variety of tactics to draw the other partner deeper into the conversation. No tactic in particular is fun, except in-so-far as it confuses the other person. Of course, when I am of sound mind, I do not find this game to be terribly fun.
I assume that you play similar games on Lesswrong. Purposely upvoting one’s own comments in an obvious way, followed by then denying that one did it, seems like a good way to confuse and frustrate other people. I know that if the thought occurred to me when I was less-than-lucid, and if I were the sort of person to play such games on Lesswrong, I probably would try the tactic out.
This seems more likely than you having a cadre of silent, but upvoting, admirers.
Both seem unlikely. I’m still confused. I think God likes trolling, maybe He did it? Not sure what mechanism He’d use though so it’s not a particularly good explanation.
Oh. That is certainly a possibility I failed to initially consider. Thank you for pointing this out.
Wedrifid said that too. I don’t have a model that predicts that. I think that most of the time my comments get upvoted to somewhere between 1 and 5 and then drop off as people who aren’t Less Wrong regulars read through; that the reverse would happen for a few hours at least is odd. It’s possible that the not-particularly-intelligent people who normally downvote my posts when they’re insightful also tend to upvote my posts when they’re “worthless”. ETA: thomblake’s hypothesis about regional differences in aesthetics seems more plausible than mine.
I think you severely underestimate the value of trolling.
Erm. I can’t say that this raises my confidence much. I am reminded of the John McCarthy quote, “Your denial of the importance of objectivity amounts to announcing your intention to lie to us. No-one should believe anything you say.”
I feel responsible for the current wave of gibberish-spam from Will, and I regret that. If it were up to me, I would present him with an ultimatum—either he should promise not to sockpuppet here ever again, and he’d better make it convincing, or else every one of his accounts that can be identified will be banned. The corrosive effect of not knowing whether a new identity is a real person or just Will again, whether he’s “conducting experiments” by secretly mass-upvoting his own comments, etc., to my mind far outweighs the value of his comments.
I freely admit that I have one sockpuppet, who has made less than five comments and has over 20 karma. I do not think that having one sockpuppet for anonymity’s sake is against community norms.
ETA: I mean one sock puppet besides Mitchell Porter obviously.
I have a private message, dated 7 October, from an account with “less than five comments and [...] over 20 karma”, which begins, “I’m Will_Newsome, this is one of my alts.” (Emphasis mine.)
Will, I’m sorry it’s turning out like this. I am not perfect myself; anyone who cares may look up users “Bananarama” and “OperationPaperclip” and see my own lame anonymous humor. More to the point, I do actually believe that you want to “keep the stars from burning down”, and you’re not just a troll out to waste everyone’s time. The way I see it, because you have neither a job to tie you down, nor genuine intellectual peers and collaborators, it’s easy to end up seeking the way forward via elaborate crazy schemes, hatched and pursued in solitude; and I suspect that I got in the way of one such scheme, by asserting that AK is you.
I have those! E.g. I spend a lot of time with Steve, who is the most rational person in the entire universe, and I hang out with folk like Nick Tarleton and Michael Vassar and stuff. All those 3 people are way smarter than me, though arguably I get around some of that by way of playing to my strengths. The point is that I can play intellectualism with them, especially Steve who’s really good at understanding me. ETA: I also talk to the Black Belt Bayesian himself sorta often.
With no offense intended to Steve, no, he isn’t.
If you know any rationalists that are better than Steve then please, please introduce me to them.
How about most rational person I know of?
Ahhhh, okay, I see why you’d feel bad now I guess? Admittedly I wouldn’t have started commenting recently unless there’d been the confusion of me and AK, but AK isn’t me and my returning was just ’cuz I freaked out that people on LW were talking about me and I didn’t know why. Really I don’t think you’re to blame at all. And thinking AK is me does seem like a pretty reasonable hypothesis. It’s a false hypothesis but not obviously so.
I was only counting alts I’d used in the last few months. I remember having made two alts, but the first one, User:Arbitrarity, I gave up on (I think I’d forgotten about it) which is when I switched to the alt that I used to message you with (apparently I’d remembered it by then, though I wasn’t using it; I just like the word “arbitrarity”).
ETA: Also note that the one substantive comment I made from Arbitrarity has obvious reasons for being kept anonymous.
Anyway I can’t see any plausible reason why you should feel responsible for my current wave of gibberish-spam. [ETA: I mean except for the gibberish-spam I’m writing as a response to your comment; you should maybe feel responsible for that.] My autobiographical memory is admittedly pretty horrible but still.
Why do you feel responsible? That’s really confusing.
Okay I admit it, Mitchell Porter is one of my many sockpuppets. Please ban Mitchell Porter unless he can prove he’s not one of my many sockpuppets.
I don’t follow; your confidence in the value of trolling or your confidence in the general worthwhileness of fairly reading or charitably interpreting my contributions to Less Wrong? ’Cuz I’d given up on the latter a long time ago, but I don’t want your poor impression of me to falsely color your views on the value of trolling.
It seems obviously the latter, and I find it equally informative.
Eliezer please ban Mitchell Porter, he’s one of my sock puppets and I feel really guilty about it. Yeah I know you’ve known the real Mitchell Porter for like a decade now but I hacked into his account or maybe I bought it from him or something and now it’s just another of my sock puppets, so you know, ban the hell out of him please? It’s only fair. Thx bro!
It’s not often that I laugh out loud and downvote the same comment! ;)
Thanks! Um do you know any easy way to provide a lot of evidence that I have only one sockpuppet? I’m mildly afraid that Eliezer is going to take Mitchell Porter’s heinous allegations seriously as part of a secret conspiracy is that redundant? fuck. anyway secret conspiracy to discredit me. I am the only one who should be allowed to discredit me!
Ask a moderator (or whatever it takes to have access to IP logs) to check whether there are multiple suspicious accounts from your most common IP. That’s even better than asking you to raise your right hand if you are not lying. It at least shows that you have enough respect for the community to try to hide it when you are defecting! :P
I’m confused. What happened overnight that made people suddenly start appreciating Will’s advocacy of his own trolling here and the surrounding context? −5 to +7 is a big change and there have been similar changes to related comments. Either someone is sockpuppeting or people are actually starting to appreciate this crap. (I’m really hoping the former!)
Edit: And now it is back to −3. How bizarre!
I’ve been appreciating it all along. I would not be terribly surprised if there were a dozen or so other people who do.
Do you specifically appreciate the advocacy of trolling comments that are the context or are you just saying that you appreciate Will’s actual contributions such as they are?
I appreciate Will’s contributions in general. Mostly the insane ones.
They remind me of a friend of mine who is absolutely brilliant but has lived his whole life with severe damage to vital parts of the brain.
I often appreciate his contributions as well. He is generally awful at constraining his abstract creativity so as to formulate constructive, concrete ideas but I can constrain abstract creativity just fine so his posts often provoke insights—the rest just bumps up against my nonsense filter. Reading him at his best is a bit like taking a small dose of a hallucinogenic to provide my brain with a dose of raw material to hack away at with logic.
Folks like you might wanna friend me on Facebook, I’m generally a lot more insightful and comprehensible there. I use Facebook like Steven Kaas uses Twitter. https://www.facebook.com/autothexis
Re your other comment re mechanisms for psi, I can’t muster up the energy to reply unfortunately. I’d have to be too careful about keeping levels of organization distinct, which is really easy to do in my head but really hard to write about. I might respond later.
That’s interesting. Which parts of the brain, if you don’t mind sharing? (Guess: qbefbyngreny cersebagny pbegrk, ohg abg irel pbasvqrag bs gung.)
I believe that is spot on, but I can’t recall specifics. Certainly in the neighborhood.
I enjoy following Will’s contributions on facebook (and here when he isn’t being willfully obnoxious). They remind me of, well, myself only worse.
I agree completely.
Did I say 5 years? Whoops...
Regarding sockpuppeting, that would suck. Can’t someone take a look at the database and figure out if many votes came from the same IP? Even better, when there are cases of weird voting behavior someone should check whether the votes came from dummy accounts by looking at the karma scores and recent submissions, and seeing whether those accounts are close to zero karma and whether their recent submissions are similar in style and diction, etc.
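The database check proposed here is mechanically simple. A minimal sketch in Python, assuming a hypothetical vote log with account, IP, and karma fields; this is not the real site’s schema or Trike’s actual tooling, neither of which is described in the thread:

    from collections import defaultdict

    def suspicious_ips(votes, karma_threshold=5):
        # votes: iterable of dicts with keys "account", "ip", "karma" (hypothetical log format)
        accounts_by_ip = defaultdict(set)
        karma = {}
        for v in votes:
            accounts_by_ip[v["ip"]].add(v["account"])
            karma[v["account"]] = v["karma"]
        flagged = {}
        for ip, accounts in accounts_by_ip.items():
            if len(accounts) > 1:
                low = sorted(a for a in accounts if karma[a] <= karma_threshold)
                flagged[ip] = {"accounts": sorted(accounts), "near_zero_karma": low}
        return flagged

    # Hypothetical usage:
    # for ip, info in suspicious_ips(load_vote_log()).items():
    #     print(ip, info["accounts"], "near-zero karma:", info["near_zero_karma"])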
And I suspect you incorrectly classify some of your contributions, placing them into a different subcategory within “willful defiance of the community preference” than where they belong. Unfortunately this means that the subset of your thoughts that are creative, deep and informed rather than just incoherent and flawed tend to be wasted.
My creative, deep, and informed thoughts are a superset of my thoughts in general not a subset wedrifid. Also I do not have any incoherent or flawed thoughts as should be obvious from the previous sentence but I realize that category theory is a difficult subject for many people.
ETA: Okay good, it took awhile for this to get downvoted and I was starting to get even more worried about the local sanity waterline.
I suspect that the reason for this is that the comment tree of which your post was a branch of is hidden by default, as it originates from a comment with less than −3 karma.
Um, on another note, could you just be less mean? ‘Mean’ seems to be the most accurate descriptor for posting trash that people have to downvote to stay hidden, after all.
No, I ran an actual test by posting messages in all caps to use as a control. Empiricism is so cool! (ETA: I also wrote a perfectly reasonable but mildly complex comment as a second control, which garnered the same number of downvotes as my insane set theory comment in about the same length of time.)
Re meanness, I will consider your request Dorikka. I will consider it.
Nope.
THANKS FOR TELLIN ME BRAH
The problem I have is that you claim to be “not optimising for karma”, but you appear to be “optimising for negative karma”. For example, the parent comment. There are two parts to it; acknowledgement of my comment, and a style that garners downvotes. The second part—why? It doesn’t fit into any other goal structure I can think of; it really only makes sense if you’re explicitly trying to get downvoted.
One of my optimization criteria is discreditable-ness which I guess is sort of like optimizing for downvotes insofar as my audience really cares about credibility. When it comes to motivational dynamics there tends to be a lot of crossing between meta-levels and it’s hard to tell what models are actually very good predictors. You can approximately model the comment you replied to by saying I was optimizing for downvotes, but that model wouldn’t remain accurate if e.g. Less Wrong suddenly started accepting 4chan-speak. That’s obviously unlikely but the point is that a surface-level model like that doesn’t much help you understand why I say what I say. Not that you should want to understand that.
Newsome FTW!
I’m confused. Have you sockpuppeted before?
I think I might understand what you’re saying here, in which case I see… sort of. I think I see what you’re doing but not why you’re doing it. Oh, well. Thank you for the explanation, that makes more sense.
Yes, barely, but I meant “past selves” in the usual Buddhist sense, i.e. I wrote some well-received posts under this account in the past. You might like the irrationality game, I made it for people like you.
On another note I’m sorry that my taste for discreditability has contaminated you by association; a year or so ago I foresaw that such an event would happen and deemed it a necessary tradeoff but naturally I still feel bad about it. I’m also not entirely sure I made the correct tradeoff; morality is hard. I wish I had synderesis.
Well, you’re half right.
Not telling which half.
You’re right.
Wow, is that all of your information? You either have a lot of money to blow, or you’re holding back.
“Deep familiarity with LessWrong concerns and modes of thought” can be explained by her having lurked a lot, and the rest of those features are not rare IME (even though they are under-represented on LW).
.
I put some text from recent comments by both AspiringKnitter and Will_Newsome into I Write Like; it suggested that AspiringKnitter writes “like” Arthur Clarke (2001: A Space Odyssey and other books) while Will_Newsome writes “like” Vladimir Nabokov (Lolita and other books). I’ve never read either, but it does look like a convenient textual comparison doesn’t trivially point to them being the same.
Also, if AspiringKnitter is a sockpuppet, it’s at least an interesting one.
When I put your first paragraph in that confabulator, it says “Vladimir Nabokov”. If I remove the words “Vladimir Nabokov (Lolita and other books)” from the paragraph, it says “H.P. Lovecraft”. It doesn’t seem to cut possible texts into clusters well enough.
I just got H.P. Lovecraft, Dan Brown, and Edgar Allan Poe for three different comments. I am somewhat curious as to whether this page clusters better than random assignment.
ETA: @#%#! I just got Dan Brown again, this time for the last post I wrote. This site is insulting me!
Apparently I write like Stephenie Meyer. And you feel insulted?
Looks like you are right. Two of my (larger, to give the algorithm more to work with) texts from other sources gave Cory Doctorow (technical piece) and again Lovecraft (a Hacker News comment about drug dogs?).
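A cruder but more transparent comparison than the web tool is easy to run by hand: a minimal sketch that scores two comment samples by cosine similarity of their character-trigram profiles. Illustrative only; this is not what I Write Like does, and short forum comments give very noisy results:

    from collections import Counter
    from math import sqrt

    def trigram_profile(text):
        # Character trigrams over whitespace-normalized, lowercased text.
        text = " ".join(text.lower().split())
        return Counter(text[i:i + 3] for i in range(len(text) - 2))

    def cosine(p, q):
        dot = sum(count * q[gram] for gram, count in p.items() if gram in q)
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    # Hypothetical usage with each user's concatenated comments:
    # print(cosine(trigram_profile(ak_text), trigram_profile(wn_text)))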
Sorry, and thanks for the correction.
There is evidence.
He can look like a moron or jerk, though, and there is even less risk for you in accepting it: he can necessarily only demand the $1000 from Will_Newsome.
You’re clearly out of touch with the populace. :) I’m only willing to risk 10% of my probability mass on your prediction.
That’s really odd. If there were some way to settle the bet I’d take it.
For what it’s worth, I thought Mitchell’s hypothesis seemed crazy at first, then looked through user:AspiringKnitter’s comment history and read a number of things that made me update substantially toward it. (Though I found nothing that made it “extremely obvious”, and it’s hard to weigh this sort of evidence against low priors.)
Out of curiosity, what’s your estimate of the likelihood that you’d update substantially toward a similar hypothesis involving other LW users? …involving other users who have identified as theists or partial theists?
It used to be possible—perhaps it still is? - to make donations to SIAI targeted towards particular proposed research projects. If you are interested in taking up this bet, we should do a side deal whereby, if I win, your $1000 would go to me via SIAI in support of some project that is of mutual interest.
Here is an experiment that could solve this.
If someone takes the bet and some of the proceeds go to trike, they might agree to check the logs and compare IPs (a matching IP or even a proxy as a detection avoidance attempt could be interpreted as AK=WN). Of course, AK would have to consent.
.
I’m still surprised that our collective ingenuity has yet to find a practical solution. I don’t think anybody is trying very hard but it’s still surprising how little our knowledge of cryptography and such is helping us.
Anyway yeah, I really don’t think IPs provide much evidence. As wedrifid said if the IPs don’t match it only means that at least I’m putting a minimal amount of effort into anonymity.
Why didn’t you suggest asking Will_Newsome?
Didn’t think about it. He would have to consent, too. Fortunately, any interest in the issue seems to have waned.
Ask him what? To raise his right arm if he is telling the truth?
I missed where he explicitly made a claim about it one way or the other.
--A Wizard of Earthsea, Ursula K. Le Guin
http://tvtropes.org/pmwiki/pmwiki.php/Main/YouDidntAsk
If he is AK then he made an explicit claim about it. So either he is not AK or he is lying—a raise your right hand situation.
I simply had not considered the logical implications of AspiringKnitter making the claim that she is not Will_Newsome, and had only noticed that no similar claim had appeared under the name of Will_Newsome.
It would be interesting if one claimed to be them both and the other claimed to be separate people. If Will_Newsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying. So that is something possible to learn from asking Will_Newsome explicitly. I hadn’t considered this when I made my original comment, which was made without thinking deeply.
Um? Supposing I’d created both accounts, I could certainly claim as Will that both accounts were me, and claim as AK that they weren’t, and in that case Will would be telling the truth.
Me too.
ETA: And I really mean no offense, but I’m sort of surprised that folk don’t immediately see things like this… is it a skill maybe?
Wason selection taskish skill, methinks—so a rare one.
But if Will is AK, then Will claimed both that they were and were not the same person (using different screen names).
(Maybe everyone knows this but I’ve pretty much denied that me and AK are the same person. Just saying so people don’t get confused.)
Yes, a good thing to clarify! I’m only speaking to a hypothetical situation.
Oh, so by “Will” you mean “any account controlled by Will” not “the account called Will_Newsome”.
I think everyone else interpreted it as the latter.
(I’m sort of surprised that folk don’t immediately see things like this… is it a skill maybe?)
Nick, it was pretty obvious to me that lessdazed and CuSithBell meant the person Will, not “any account controlled by Will” or “the account called Will_Newsome”—it doesn’t matter if the person would be using an account in order to lie, or an email in order to lie, or Morse code in order to lie, just that they would be lying.
It was “obvious” to me that lessdazed didn’t mean that and it would’ve been obvious to me that CuSithBell did mean that if I hadn’t been primed to interpret his/her comment in the light of lessdazed’s comment. Looking back I’m still not sure what lessdazed intended, but at this point I’m starting to think he/she meant the same as CuSithBell but unfortunately put an underscore between “Will” and “Newsome”, confusing the matter.
Well, this was my first post in the thread. I assume you are referring to this post by lessdazed? I thought at the time of my post that lessdazed was using it in the former way (though I’d phrase it “the person Will Newsome”), as you say—either Will lied with the Will account, or told the truth with the Will account and was thus AK, and thus lying with the AK account.
I now think it’s possible that they meant to make neither assumption, instead claiming that if the accounts were inconsistent in this way (if the Will account could not “control” the AK account) then this would indicate that Will (the account and person) was lying about being AK. This claim fails if Will can be expected to engage in deliberate trickery (perhaps inspired by lessdazed’s post), which I think should be a fairly uncontentious assertion.
Yes, that’s true.
And?
And therefore, either one way or another, Will would be lying.
(Maybe I should point out that this is all academic since at this point both AK and I have denied that we’re the same person, though I’ve been a little bit more coy about it.)
And then he (the person) is lying (also telling the truth, naturally, but I interpreted your claim that he would be telling the truth as a claim that he would not be lying).
I suss out the confusion in this post.
Ah! The person (whatever his or her name was) would be lying, although the Will Newsome the identity would not be. I get it now.
Edit: And then I was utterly redundant. Sorry twice.
Absolutely not a problem :) I think I got turned around a few times there myself.
This was my initial interpretation as well, but on reflection I think lessdazed meant “ask him if it’s okay if his IP is checked.” Although that puts us in a strange situation in that he’s then able to sabotage the credibility of another member through refusal, but if we don’t require his permission we are perhaps violating his privacy...
Briefly, my impulse was “but how much privacy is lost in demonstrating A is (probably—proxies, etc) not a sock puppet of B”? If there’s no other information leaked, I see no reason to protect against a result of “BAD/NOTBAD” on privacy grounds. However, that is not what we are asking—we’re asking if two posters come from the same IP address. So really, we need to decide whether posters cohabiting should be able to keep that cohabitation private—which seems far more weighty a question.
I probably phrased it wrong. AK does not have to consent, but I would be surprised if the site admins would bother getting in the middle of this silly debate unless both parties ask for it and provide some incentive to do so.
Yes, it may be legal to check people’s IP addresses, but that doesn’t mean it’s morally okay to do so without asking; and if one does check, it’s best to do so privately (i.e. not publicize any identifying information, only the information “yup, it’s the same IP as another user”).
No, but it still is morally ok. In fact it is usually the use of multiple accounts that is frowned upon, morally questionable or an outright breach of ToS—not the identification thereof.
I don’t think sock puppets are always frowned upon—if Clippy and QuirinusQuirrel were sock puppets of regular users (I think Quirrell is, but not Clippy), they are “good faith” ones (as long as they don’t double downvote etc.), and I expect “outing” them would be frowned upon.
If AK is a sock puppet, then yeah, it’s something morally questionable the admins should deal with. But I wouldn’t extend that to all sock puppets.
Quirrell overtly claims to be a sock puppet or something like one (it’s kind of complicated), whereas Clippy has been consistent in its claim to be the online avatar of a paperclip-maximizing AI. That said, I think most people here believe (like good Bayesians) that Clippy is more likely to be a sockpuppet of an existing user.
Huh. Can you clarify what is morally questionable about another user posting pseudonymously under the AK account?
For example, suppose hypothetically that I was the user who’d created, and was posting as, AK, and suppose I don’t consider myself to have violated any moral constraints in so doing. What am I missing?
Having multiple sock puppets can be a dishonest way to give the impression that certain views are held by more members than in reality. This isn’t really a problem for novelty sockpuppets (Clippy and Quirrel), since those clearly indicate their status.
What’s also iffy in this case is the possibility of AK lying about who she claims to be, and wasting everybody’s time (which is likely to go hand-in-hand with AK being a sockpuppet of someone else).
If you are posting as AK and are actually female and Christian but would rather that fact not be known about your more famous “TheOtherDave” identity, then I don’t have any objection (as long as you don’t double vote, or show up twice in the same thread to support the same position, etc.).
OK, thanks for clarifying.
I can see where double-voting is a problem, both for official votes (e.g., karma-counts) and unofficial ones (e.g., discussions on controversial issues).
I can also see where people lying about their actual demographics, experiences, etc. can be problematic, though of course that’s not limited to sockpuppetry. That is, I might actually be female and Christian, or seventeen and Muslim, or Canadian and Theosophist, or what-have-you, and still only have one account.
Hmm. I am generally a strong supporter of anonymity and pseudonymity. I think we just have to accept that multiple internet folks may come from the same meatspace body. You are right that sockpuppets made for rhetorical purposes are morally questionable, but that’s mostly because rhetoric itself is morally questionable.
My preferred approach is to pretend that names, numbers, and reputations don’t matter. Judge only the work, and not the name attached to it or how many comments claim to like it. Of course this is difficult, like the rest of rationality; we do tend to fail on these by default, but that part is our own problem.
Sockpuppetry and astroturfing is pretty clearly a problem, and being rational is not a complete defense. I’m going to have to think about this problem more, and maybe make a post.
Clippy is too.
Weren’t you just telling me that it is morally wrong for the admins to even look at the IP addresses?
When it comes to well-behaved sockpuppets, “Don’t ask, don’t tell” seems to work.
I’ll bet US$10 you have significant outside information.
He doesn’t.
See, I’d like to believe you, but a thousand dollars is a lot of money.
Take him up on his bet, then.
(Not that I have any intention of showing up anywhere just to show you who I am and am not. Unless you’re going to pay ME that $1000.)
What about if I bet you $500 that you’re not WillNewsome? That way you can prove your separate existence to me, get paid, and I can use the proof you give me to take a thousand from MitchellPorter. In fact, I’ll go as high as 700 dollars if you agree to prove yourself to me and MitchellPorter.
Of course, this offer is isomorphic to you taking Mitchell’s bet and sending 300-500 dollars to me for no reason, and you’re not taking his bet currently, so I don’t expect you to be convinced by this offering either.
What possible proof could I offer you? I can’t take you up on the bet because, while I’m not Newsome, I can’t think of anything I could do that he couldn’t fake if this were a sockpuppet account. If we met in person, I could be the very same person as Newsome anyway; he could really secretly be a she. Or the person you meet could be paid by Newsome to pretend to be AspiringKnitter.
Well, I don’t know what proof you could offer me; but if we genuinely put 500 dollars either way on the line, I am certain we’d rapidly agree on a standard of proof that satisfied us both.
Nope, plenty of people onsite have met Will. I mean, I suppose it is not strictly impossible, but I would be surprised if he were able to present that convincingly as a dude and then later present as convincingly as a girl. Bonus points if you have long hair.
Excellent question. One way to deal with it is for all the relevant agents to agree on a bet that’s actually specified… that is, instead of betting that “AspiringKnitter is/isn’t the same person as WillNewsome,” bet that “two verifiably different people will present themselves to a trusted third party identifying as WillNewsome and AspiringKnitter” and agree on a mechanism of verifying their difference (e.g., Skype).
You’re of course right that these are two different questions, and the latter doesn’t prove the former, but if y’all agree to bet on the latter then the former becomes irrelevant. It would be silly of anyone to agree to the latter if their goal was to establish the former, but my guess is that isn’t actually the goal of anyone involved.
Just in case this matters, I don’t actually care. For all I know, you and shokwave are the same person; it really doesn’t affect my life in any way. This is the Internet, if I’m not willing to take people’s personas at face value, then I do best not to engage with them at all.
Why would he do that? He’d lose!
Yeah, you take the bet. Free money! Show up on Skype.
And get accused of being this person’s sister impersonating his sockpuppet?
As far as we know.
I’ll take up to $15 of that, at even odds. Possibly more, if the odds can be skewed in my favor.
I have a general heuristic that making one on one bets is not worthwhile as a way to gain money, as the other party’s willingness to bet indicates they don’t expect to lose money to me. I would also be surprised if a bet of this size, between two members of a rationalist website, paid off to either side (though I guess paying off as a donation to SIAI would not be so surprising). At this point though, I am guessing the bet will not go through.
Was there supposed to be a time limit on that bet offer? It seems like as long as the offer is available you and everyone else will have an incentive not to show all the evidence as a fully-informed betting opponent is less profitable.
Can you please talk more about the word “immortal?” As nothing in physics can make someone immortal, as far as I know, did you mean truly immortal, or long lived, or do you think it likely science will advance and make immortality possible, or what?
...Poor choice of words based on EY’s goals (which are just as poorly-stated).
Allow me to invent (or put under the microscope a slight, existing) distinction.
“Poorly stated”—not explicit, without fixed meaning. The words written may mean any of several things.
“Poorly worded”—worded so as to mean one thing which is wrong, perhaps even obviously wrong, in which case the writer may intend for people to assume he didn’t mean the obviously wrong thing, but instead meant the less literal, plausibly correct thing.
I have several times criticized the use of the words “immortal” and “immortality” by several people, including EY. I agree with the analysis by Robin Hanson here, in which he argues that the word “immortality” distracts from what people actually intend.
I characterize the use of “immortality” on this site as frequently obviously wrong in many contexts in which it is used, in which it is intended to mean the near thing “living a very long time and not being as fragile as humans are now.” In other words, often it is a poor wording of clear concepts.
I’m not sure if you agree, or instead think that the goal of very long life is unclear, or poorly justified, or just wrong, or perhaps something else.
Yeah, good point. That makes sense.
As far as I understand, EY believes that humans and/or AIs will be able to survive until at least the heat death of the Universe, which would render such entities effectively immortal (i.e., as immortal as it is possible to be). That said, I do agree with your assessment.
If someone believed that no human and/or AI will ever be able to last longer than 1,000 years—perhaps any mind goes mad at that age, or explodes due to a law of the universe dealing with mental entities, or whatever—that person would be lambasted for using “immortal” to mean beings “as immortal as it is possible to be in my opinion.”
It is unfortunate that we don’t have clearer single words for the more plausible, more limited alternatives, closer to "living a very long time and not being as fragile as humans are now."
Come to think of it, if de Grey’s SENS program actually succeeded, we’d get the "living a very long time" but not the "not being as fragile as humans are now," so it would help to have terms that distinguish those.
And all of the variations on these are distinct from uploading/ems, with the possibility of distributed backups.
Unfortunately, I suspect that neither of these is very likely to ultimately happen. SENS has curing cancer as a subtask. Uploading/ems requires a scanning technology fast enough to scan a whole human brain and fine-grained enough to distinguish synapse types. I think other events will happen first.
(Waves to Clippy)
Welcome, it’s fun to have you here.
So, the next thing: I think you should avoid the religion topic here. I mean, you are allowed to continue with it, but I fear you are going to wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change opinions. Learning new things is refreshing; discussions about religion rarely are.
Admittedly, I think that there is no god, but I also don’t think anyone here will convince you of that. I think you actually have a higher chance of converting someone here than someone here has of converting you.
So come, share some of your thoughts about what LW is doing wrong, or just take part in whatever discussions you find interesting. Welcome!
Hmm
You know, I was right.
You guys are fine and all, but I’m not cut out for this. I’m not smart enough or thick-skinned enough or familiar enough with various things to be a part of this community. It’s not you, it’s me, for real, I’m not saying that to make you feel better or something. I’ve only made you all confused and upset, and I know it’s draining for me to participate in these discussions.
See you.
Stick around. Your contributions are fine. Not everyone will be accusatory like nyan_sandwich.
Read through the Sequences and comment on what seems good to you.
It’s fine, I’m not pitching a fit about a little crudeness. I really can take it… or I can stay involved, but I don’t think I can do both, unlike some people (like maybe you) who are without a doubt better at some things than I am. Don’t blame him for chasing me off, I know the community is welcoming.
And I’m not really looking for reassurance. Maybe I’ll sleep on it for a while, but I really don’t think I’m cut out for this. That’s fine with me, I hope it’s fine with you too. I might try to hang around the HP:MoR thread, I don’t know, but this kind of serious discussion requires skills I just don’t have.
All of that said, I really appreciate that sweet comment. Thank you.
I hope you’re not seeing the options as “keep up with all the threads of this conversation simultaneously” or “quit LW”. It’s perfectly OK to leave things hanging and lurk for a while. (If you’re feeling especially polite, you can even say that you’re tapping out of the conversation for now.)
(Hmm, I might add that advice to the Welcome post...)
Okay. I’m tapping out of everything indefinitely. Thank you.
But remember, fixing this sort of problem is ostensibly what we’re here for.
If we fail at that for reasons you can articulate, I at least would like to know.
Education is ostensibly what high school teachers are there for, but if a student shows up who can’t read, they don’t blame themselves because they’re not there to teach basic skills like that.
Good questions.
Interesting: how comfortable are you with the concept of being immortal but being under the yoke of an immortal, whimsical tyrant? Do you not see the irony at all? Besides, I think you’ll find "indefinite life extension" the more appropriate term.
There are places for this debate and they’re not this thread. You’re being rude.
My apologies. Interesting questions none the less.
And more disappointingly, confirming what should have been completely off-the-mark predictions about what reception Knitter would get as a Christian. I confess myself surprised.
Hi, Knitter. What does EC stand for again?
The boring explanation is that Laoch was taught at the feet of PZ Myers and Hitchens, who operate purely in places open for debate (atheist blogs are not like dinner tables); talk about the arguments of religious people not to them, but to audiences already sympathetic to atheism, and thus care little about principles of charity; and have a beef with religion-as-harmful-organization (e.g. "Hassidic Judaism hurts queers!") and rather often with religious-people-as-outgroup-members (e.g. "Sally says abortion is murder because she’s trying to manipulate me!"), which interferes with their beef with religion-as-reasoning-mistake (e.g. "Sadi thinks he can derive knowledge in ways that violate thermodynamics!").
The reading-too-much-HPMOR explanation is that Laoch is an altruistic Slytherin, who wants Knitter to think: "This is a good bunch. Not only are most people nice, but they can swiftly punish jerks. And there are such occasional jerks—I don’t have to feel silly about expecting a completely different reaction than I got, it was because bad apples are noisier."
I would have thought there ain’t no such critter as “too much MoR”, but after seeing that theory… ;)
It stands for evaporative cooling and I’m not offended. It’s a pretty valid point.
(Laoch: I expect God not to abuse his power, hence I wouldn’t classify him as a whimsical tyrant. And part of my issue is with being turned into a computer, which sounds even worse than making a computer that acts like me and thinks it is me.)
I can’t decide which of MixedNuts’s hypotheses is more awesome.
I’d be interested to hear more about your understanding of what a computer is, that drives your confidence that being turned into one is a bad thing.
Relatedly, how confident are you that God will never make a computer that acts like you and thinks it is you? How did you arrive at that confidence?
(This is totally off-topic, but is there a "watch comment" feature hidden somewhere in the LW UI? I am also interested to see AspiringKnitter’s opinion on this subject, but I just know I’ll end up losing track of it without technological assistance...)
Every LW comment has its own RSS feed. You can find it by going to the comment’s permalink URL and then clicking on “Subscribe to RSS Feed” from the right column or by adding ”/.rss” to the end of the aforementioned URL, whichever is easier for you. The grandparent’s RSS feed is here.
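(If you’d rather poll that feed from a script than from a feed reader, here’s a minimal sketch using the third-party feedparser library; the permalink in it is a made-up placeholder, so substitute the comment’s real permalink URL.)

    # Minimal sketch: watch an LW comment's replies via its RSS feed.
    # Requires the third-party feedparser package (pip install feedparser).
    import feedparser

    permalink = "http://lesswrong.com/lw/XXXX/welcome/abcd"  # placeholder permalink
    feed = feedparser.parse(permalink + "/.rss")

    # Print the title and link of each item currently in the feed.
    for entry in feed.entries:
        print(entry.title, entry.link)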
Not that I know of, but http://lesswrong.com/user/AspiringKnitter/ is one way to monitor that if you like.
For one thing, I’m skeptical that an em would be me, but aware that almost everyone here thinks it would be. If it thought it was me, and they thought it was me, but I was already dead, that would be really bad. And if I somehow wasn’t dead, there could be two of us and both claiming to be the real person. God would never blunder into it by accident believing he was prolonging my life.
And if it really was me, and I really was a computer, whoever made the computer would have access to all of my brain and could embed whatever they wanted in it. I don’t want to be programmed to, just as an implausible example, worship Eliezer Yudkowsky. More plausibly, I don’t want to be modified without my consent, which might be even easier if I were a computer. (For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
And if the computer was not me but was sentient (wouldn’t it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people?), then I wouldn’t want it to be vulnerable to involuntary modification, either. I’d feel a great deal of responsibility for it if I were alive, and if I were not alive, then it would essentially be the worst of both worlds. God doing this would not expose it to any more risk than all other living beings.
Does this seem rational to you, or have I said something that doesn’t make sense?
I’m going to scoop TheOtherDave on this topic, I hope he doesn’t mind :-/
But first of all, what do you mean by "an em"? I think I know the answer, but I want to make sure.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be “already dead” under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
There are two scenarios I can think of that help illustrate my point.
1). Let’s pretend that you and I know each other relatively well, though only through Less Wrong. But tomorrow, aliens abduct me and replace me with a machine that makes the same exact posts as I normally would. If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you’re talking to the real Bugmaster, or the replica ? If the answer is “no”, then how do you know that you aren’t talking to the replica at this very moment ? More importantly, why does it matter ?
2). Let’s say that a person gets into an accident, and loses his arm. But, luckily, our prosthetic technology is superb, and we replace his arm with a perfectly functional prosthesis, indistinguishable from the real arm (in reality, our technology isn’t nearly as good, but we’re getting there). Is the person still human ? Now let’s say that one of his eyes gets damaged, and similarly replaced. Is the person still human ? Now let’s say that the person has epilepsy, but we are able to implant a chip in his brain that will stop the epileptic fits (such implants do, in fact, exist). What if part of the person’s brain gets damaged—let’s say, the part that’s responsible for color perception—but we are able to replace it with a more sophisticated chip. Is the person still human ? At what point do you draw the line from “augmented human” to “inhuman machine”, and why do you draw the line just there and not elsewhere ?
Two copies of me would both be me, though they would soon begin to diverge, since they would have slightly different perceptions of the world. If you don’t believe that two identical twins are the same person, why would you believe that two copies are ?
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated cryptographic algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well. On the flip side, your (and my, and everyone else’s) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won’t necessarily be a step down.
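(For concreteness, here’s a minimal sketch of one way data can be protected against undetected modification, using only Python’s standard library; the key and the "mind-state" string are placeholder stand-ins, and securing an actual uploaded mind would obviously involve far more than this.)

    # Minimal sketch of tamper detection with a keyed hash (HMAC).
    import hmac
    import hashlib

    key = b"secret-key-known-only-to-the-owner"    # placeholder key
    message = b"contents of the stored mind-state"  # placeholder data

    # The owner computes this tag and stores it alongside the data.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    def is_unmodified(data: bytes, stored_tag: str) -> bool:
        # Anyone holding the key can check whether the data still matches the tag.
        expected = hmac.new(key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, stored_tag)

    print(is_unmodified(message, tag))                 # True
    print(is_unmodified(b"tampered mind-state", tag))  # False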
So, you don’t want your mind to be modified without your consent, but you give unconditional consent to God to do so ?
I personally would answer “no”, because I believe that the concept of qualia is a bit of a red herring. I might be in the minority on this one, though.
That’s a REALLY good response.
An em would be a computer program meant to emulate a person’s brain and mind.
If you create such a mind that’s just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won’t know the answer. In that sense, it isn’t me. If you then let us meet each other, it could tell me something.
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, when it could actually be true that Bugmaster is in an alien jail being tortured. The machine also doesn’t have Bugmaster’s soul. It would be important to ascertain whether or not it did have a soul, though I’d have some trouble figuring out a test for that (but I’m sure I could; I’ve already got ideas, pretty much along the lines of "ask God"), and if it doesn’t, then it’s useless to worry about preaching the Gospel to the replica. (It’s probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.) This has implications for, e.g., reunions after death. Not to mention that if I’m concerned about the state of Bugmaster’s soul, I should worry about the Bugmaster on the alien ship. And if both of them (the replica and the real Bugmaster) accept Jesus (a soulless robot couldn’t do that), that’s two souls saved rather than one.
That’s a really good question. How many grains of sand do you need to remove from a heap of sand for it to stop being a heap? I suppose what matters is whether the soul stays with the body. I don’t know where the line is. I expect there is one, but I don’t know where it is.
Of course, what do we mean by “inhuman machine” in this case? If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I’d venture to call it a real person.
And who does the programming and encrypting? The fact that only one person (who has clearly not respected my wishes to begin with, since I don’t want to be a computer, so why should xe start now?) can alter me at will to be xyr peon does not actually make me feel significantly better about the whole thing than if anyone could do it.
I feel like being sarcastic here, but I remembered the inferential distance, so I’ll try not to. There’s a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn’t know everything, and may or may not love me enough to die for me… and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God’s character. (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he’d modify a human against the human’s will.)
True. I consider the risk unacceptably high. I just think it’d be even worse as a computer. We have to practice our critical thinking as well as we can and avoid mind-altering chemicals like drugs and coffee. (I suppose you don’t want to hear me say that we have to pray for discernment, too?) A core tenet of utilitarianism is that we compare possibilities to their alternatives. This is bad. The alternatives are worse. Therefore, this is the best.
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood…
Suppose that tonight I lay a minefield all around your house. In the morning, I tell you the minefield is there. Then I send my child to walk through it. My kid gets blown up, but this shows you a safe path out of your house and allows you to go about your business. If I then suggest that you should express your gratitude to me every day for the rest of your life, would you think that reasonable? … According to your theology, was hell not created by God?
I once asked my best friend, who is a devout evangelical, how he could be sure that the words of the Bible as we have it today are correct, given the many iterations of transcription it must have gone through. According to him, God’s general policy of noninterference in free will didn’t preclude divinely inspiring the writers of the Bible to transcribe it inerrantly. At least according to one theist’s account, then, God was willing to interfere as long as it was something really important for man’s salvation. And even if you don’t agree with that particular interpretation, I’d like to hear your explanation of how the points at which God "hardened Pharaoh’s heart", for example, don’t amount to interfering with free will.
I have nothing to say to your first point because I need to think that over and study the relevant theology (I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is). With regard to your second point, however, I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible. (I must disagree less cordially with anyone who espouses the inerrancy of only the King James Version.) I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt. A quick glance at the disagreements even among ancient manuscripts could tell you that.
I suppose if I complain about people with illogical beliefs making Christianity look bad, you’ll think it’s a joke...
I don’t really have a dog in this race. That said, Matthew 25:41 seems to point in that direction, although “prepared” is perhaps a little weaker than “made”. It does seem to imply control and deliberate choice.
That’s the first passage that comes to mind, anyway. There’s not a whole lot on Hell in the Bible; most of the traditions associated with it are part of folk as opposed to textual Christianity, or are derived from essentially fanfictional works like Dante’s or Milton’s.
That made me laugh. Calling Dante “fanfiction” of the Bible was just so unexpected and simultaneously so accurate.
Upvoted for self-awareness.
The more general problem, of course, is that if you don’t believe in textual inerrancy (of whatever version of the Bible you happen to prefer), then you aren’t relying on God to decide which parts are correct; you have to decide that yourself.
As Prismattic said, if you discard inerrancy, you run into the problem of classifications. How do you know which parts of the Bible are literally true, which are metaphorical, and which have been superseded by the newer parts ?
I would also add that our material world contains many things that, while they aren’t as bad as Hell, are still pretty bad. For example, most animals eat each other alive in order to survive (some insects do so in truly terrifying ways); viruses and bacteria ravage huge swaths of the population, human, animal and plant alike; natural disasters routinely cause death and suffering on the global scale, etc. Did God create all these things, as well ?
That’s not a very good argument. "If you accept some parts are metaphorical, how do you know which are?" is a good one, but if you only accept transcription and translation errors, you just treat it like any other historical document.
My bad; for some reason I thought AK meant that some parts of the Bible are not meant to be taken literally, but on second reading, it’s obvious that she is only referring to transcription and translation errors, like you said. I stand corrected.
Well, that really depends on what your translation criteria are. :) Reading KJV and, say, NIV side-by-side is like hearing Handel in one ear and Creed in the other.
When I feel the urge, I go to r/debatereligion. The standards of debate aren’t as high as they are here, of course; but I don’t have to feel guilty about lowering them.
Upvoted for dismissing the inclination to respond sarcastically after remembering the inferential distance.
That’s what I thought, cool.
Agreed; that is similar to what I meant earlier about the copies “diverging”. I don’t see this as problematic, though—after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that’s probably a good thing.
Ok, that’s a very good point; my example was flawed in this regard. I could’ve made the aliens more obviously benign. For example, maybe the biological Bugmaster got hit by a bus, but the aliens snatched up his brain just in time, and transcribed it into a computer. Then they put that computer inside of a perfectly realistic synthetic body, so that neither Bugmaster nor anyone else knows what happened (Bugmaster just thinks he woke up in a hospital, or something). Under these conditions, would it matter to you whether you were talking to the replica or the biological Bugmaster?
But, in the context of my original example, with the (possibly) evil aliens: why aren’t you worried that you are talking to the replica right at this very moment ?
I agree that the issue of the soul would indeed be very important; if I believed in souls, as well as a God who answers specific questions regarding souls, I would probably be in total agreement with you. I don’t believe in either of those things, though. So I guess my next two questions would be as follows:
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn’t count as you, and/or
b). Is there anything other than faith that causes you to believe that souls exist ?
If the answers to (a) and (b) are both “no”, then we will pretty much have to agree to disagree, since I lack faith, and faith is (probably) impossible to communicate.
Well, yes, preaching to me or to any other atheist is very unlikely to work. However, if you manage to find some independently verifiable and faith-independent evidence of God’s (or any god’s) existence, I’d convert in a heartbeat. I confess that I can’t imagine what such evidence would look like, but just because I can’t imagine it doesn’t mean it can’t exist.
Do you believe that a machine could, in principle, “feel like a human” without having a soul ? Also, when you say “feel”, are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject’s behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling ?
That’s a good point: if you are trusting someone with your mind, how do you know they won’t abuse that trust ? But this question applies to your biological brain, as well, I think. Presumably, there exist people whom you currently trust; couldn’t the person who operates the mind transfer device earn your trust in a similar way ?
Oh, in that scenario, obviously you shouldn’t trust anyone who wants to upload your mind against your will. I am more interested in finding out why you don’t want to “be a computer” in the first place.
You’re probably aware of this already, but just in case: atheists (myself included) would say (at the very minimum) that your first sentence contains logical contradictions, and that your second sentence is contradicted by evidence and most religious literature, even if we assume that God does exist. That is probably a topic for a separate thread, though; I acknowledge that, if I believed what you do about God’s existence and his character, I’d agree with you.
Guilty as charged; I’m drinking some coffee right now :-/
I only want to hear you say things that you actually believe...
That said, let’s assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy. Anyway, let’s assume that this is the case. If your computerized mind under this scenario was able to think faster, and remember more, than your biological mind, wouldn’t that mean that your critical skills would greatly improve? If so, you would be more resistant to persuasion and indoctrination, not less.
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge? My answer would be the one who was here first is me, which is problematic because I could be a replica, but only conditional on machines having souls or many of my religious beliefs being wrong. (If I learn that I am a replica, I must update on one of those.)
Besides being electronic and the fact that I might also be currently existing (can there be two ships of Theseus?), no. Oh, wait, yes; it SHOULDN’T count as me if we live in a country which uses deontological morality in its justice system. Which isn’t really the best idea for a justice system anyway, but if so, then it’s hardly fair to treat the construct as me in that case because it can’t take credit or blame for my past actions. For instance, if I commit a crime, it shouldn’t be blamed if it didn’t commit the crime. (If we live in a sensible, consequentialist society, we might still want not to punish it, but if everyone believes it’s me, including it, then I suppose it would make sense to do so. And my behavior would be evidence about what it is likely to do in the future.)
If by “faith” you mean “things that follow logically from beliefs about God, the afterlife and the Bible” then no.
No, but it could act like one.
When I say “feel like a human” I mean “feel” in the same way that I feel tired, not in the same way that you would be able to perceive that I feel soft. I feel like a human; if you touch me, you’ll notice that I feel a little like bread dough. I cannot perceive this directly, but I can observe things which raise the probability of it.
But something acting like a person is sufficient reason to treat it like one. We should err on the side of extending kindness where it’s not needed, because the alternative is to err on the side of treating people like unfeeling automata.
Since I can think of no one I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement, I feel no compulsion to trust anyone in a situation where I would be even more vulnerable. Trust has limits.
I’m past underestimating you enough not to know that. I’m aware that believing something is a necessary condition for saying it; I just don’t know if it’s a sufficient condition.
Those are some huge ifs, but okay.
Yes, and if we can prove that my soul would stay with this computer (as opposed to a scenario where it doesn’t but my body and physical brain are killed, sending the real me to heaven about ten decades sooner than I’d like, or a scenario where a computer is made that thinks like me only smarter), and if we assume all the unlikely things stated already, and if I can stay in a corporeal body where I can smell and taste and hear and see and feel (and while we’re at it, can I see and hear and smell better?) and otherwise continue being the normal me in a normal life and normal body (preferably my body; I’m especially partial to my hands), then hey, it sounds neat. That’s just too implausible for real life.
EDIT: oh, and regarding why I’m not worried now, it’s because I think it’s unlikely for it to happen right now.
So… hm.
So if I’m parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways:
it doesn’t have a soul, and consequently is denied the possibility of heaven,
it doesn’t have a sense of smell, taste, hearing, sight, or touch,
it doesn’t have my hands, or perhaps hands at all,
it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn’t be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I’m not exactly sure what your reasons for believing all of that are… they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
Mostly.
More like “it doesn’t have a soul, therefore there’s nothing to send to heaven”.
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don’t expect them to feel. If I asked a philosopher what it’s like for a bat to be a bat, they’d understand the allusion I’d like to make here, but that’s awfully jargony. Here’s an explanation of the concept I’m trying to convey.
I don’t know whether that’s something you’ve overlooked or whether I’m asking a wrong question.
If it helps, I’ve read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on “What is it like to bat a bee?” in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we’re having.
I mean, I’m willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I’m happy to posit that there’s some kind of constraint—label it X for now—such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it’s possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that’s fine. It’s just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn’t matter very much.
That’s why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.
“IF people have to be made of protoplasm, AND IF computers can’t be made of protoplasm, THEN people can’t run on computers… but not only do I reject the first premise, I reject the second one as well.”
Does it matter?
What if we can run some bunch of algorithms on a computer that pass the Turing test but are provably non-sentient? When it comes down to it, we’re looking for something that can solve generalized problems willingly and won’t deliberately try to kill us.
It’s like the argument against catgirls. Some people would prefer to have human girls/boys but trust me sometimes a catgirl/boy would be better.
It matters for two things:
1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it is important to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make alive things morally relevant, or we have to consider its desires as well.
OK, fair enough, if you’re looking for uploads. Personally I don’t care, as I take the position that the upload isn’t really me; it’s a simulated me, in the same way that a "spirit version of me" (i.e., a soul) isn’t really me either.
Please correct my logic if I’m wrong here: in order to take the position that an upload is provably you, the only feasible way to do the test is to have other people verify that it’s you. The upload saying it’s you doesn’t cut it, and neither does the upload just acting exactly like you. In other words, the test for whether an upload is really you doesn’t even require it to be really you, just to simulate you exactly. Which means that the upload doesn’t need to be sentient.
Please fill in the blanks in my understanding so I can get where you’re coming from (this is a request for information not sarcastic).
I endorse dthomas’ answer in the grandparent; we were talking about uploads.
I have no idea what to do with word “provably” here. It’s not clear to me that I’m provably me right now, or that I’ll be provably me when I wake up tomorrow morning. I don’t know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I’m not sure the question even makes any sense.
To say that other people’s judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me then I’ve experienced a year of isolation and if X isn’t me then I haven’t experienced it and if X isn’t anyone then no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity into component parts (shared memory, continuity of experience, various sorts of physical similarity, etc.) without anything left over. If a system has all those component parts (it remembers what I remember, it remembers being me, it looks and acts like me, etc.), then our brains will attach that conceptual node to that system, and we’ll agree that that system is me, and that’s all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it’s mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There’s no way to study the child to answer that question, because the child’s national identity was never an attribute of the child in the first place.
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don’t take the position that verifying an upload is a solved problem, or even that it’s necessarily ever going to be feasible.
That said, consider the following hypothetical process:
You are hooked up to sensors monitoring all of your sensory input.
We scan you thoroughly.
You walk around for a year, interacting with the world normally, and we log data.
We scan you thoroughly.
We run your first scan through our simulation software, feeding it the year’s worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
Do you expect that there is a way in which you are sentient, in which your simulation could not be if you plugged it into (say) a robot body or virtual environment that would feed it new sensory data?
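(If it helps to make the comparison step concrete, here’s a toy sketch in Python; the update rule, the numbers, and the function names are all invented stand-ins, not a claim about how brains or scanners actually work.)

    # Toy sketch of the comparison step: if the state update is a deterministic
    # function of the previous state and the sensory input, then running the
    # first scan through the simulator with the logged inputs should reproduce
    # the second scan exactly.

    def step(state, sensory_input):
        # Placeholder update rule standing in for the real dynamics.
        return (state * 31 + sensory_input) % (10 ** 9)

    def simulate(initial_state, sensory_log):
        state = initial_state
        for observation in sensory_log:
            state = step(state, observation)
        return state

    scan_1 = 42                      # state captured by the first scan
    sensory_log = [7, 3, 12, 90]     # the year's logged sensory input
    # In this toy, the second scan captures whatever state the deterministic
    # rule actually produced over the year.
    scan_2 = step(step(step(step(scan_1, 7), 3), 12), 90)

    # The check described above: does the simulation match the second scan?
    print(simulate(scan_1, sensory_log) == scan_2)   # prints True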
That is a very good response and my answer to you is:
I don’t know AND
To me it doesn’t matter, as I’m not in favor of any kind of destructive scanning upload, ever, though I may consider slow augmentation as parts wear out.
But I’m not saying you’re wrong. I just don’t know and I don’t think it’s knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it’s sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say to manipulate you?
Nice thought experiment.
No, I probably would not consent to being non-destructively scanned so that my simulated version could be evilly manipulated.
Regardless of whether or not it’s sentient, provably or otherwise.
You make sense. I’m starting to think a computer could potentially be sentient. Isn’t a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
I personally believe that humans are likewise machines, generally made of meat, that run “programs”. I put the word “programs” in scare-quotes because our programs are very different in structure from computer programs, though the basic concept is the same.
What we have in common with computers, though, is that our programs are self-modifying. We can learn, and thus change our own code. Thus, I see no categorical difference between humans and computers, though obviously our current computers are far inferior to humans in many (though not all) areas.
That’s a perfectly workable model of a computer for our purposes, though if we were really going to get into this we’d have to further explore what a circuit is.
Personally, I’ve pretty much given up on the word “sentient”… in my experience it connotes far more than it denotes, such that discussions that involve it end up quickly reaching the point where nobody quite knows what they’re talking about, or what talking about it entails. I have the same problem with “qualia” and “soul.” (Then again, I talk comfortably about something being or not being a person, which is just as problematic, so it’s not like I’m consistent about this.)
But that aside, yeah, if any physical thing can be sentient, then I don’t see any principled reason why a computer can’t be. And if I can be implemented in a physical thing at all, then I don’t see any principled reason why I can’t be implemented in a computer.
Also (getting back to an earlier concern you expressed), if I can be implemented in a physical thing, I don’t see any principled reason why I can’t be implemented in two different physical things at the same time.
I agree, Dave. Also, I’ll go further: for my own personal purposes, I care not a whit if a powerful piece of software that passes the Turing test, can do cool stuff, and won’t kill me is basically an automaton.
I would go one step further, and claim that if a piece of software passes the general Turing test—i.e., if it acts exactly like a human would act in its place—then it is not an automaton.
… over some sufficiently broad set of places.
Heh, yes, good point.
And I’d say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I’d say no, but the point is irrelevant.
Why not, and why is it irrelevant ? For example, if your car gets stolen, and later returned to you, wouldn’t you want to know whether you actually got your own car back ?
I have to admit, your response kind of mystified me, so now I’m intrigued.
Very good questions.
No, I’d not particularly care whether it was my own car that was returned to me, because it gives me utility and it’s just a thing.
I’d care if my wife was kidnapped and some simulacrum was given back in her stead, but I doubt I would be able to tell if it was such an accurate copy. If I knew the fake wife was fake, I’d probably be creeped out, but if I didn’t know, I’d just be so glad to have my "wife" back.
In the case of the simulated porn actress, I wouldn’t really care if she was real, because her utility for me would be similar to watching a movie. Once done with the simulation, she would be shut off.
That said, the struggle would be with whether or not she (the catgirl version of the porn actress) was truly sentient. If she was truly sentient, then I’d be evil in the first place, because I’d be coercing her to do evil stuff in my personal simulation. But I think there’s no viable way to determine sentience other than "if it walks like a duck and talks like a duck", so we’re back to the beginning again, and THUS I say "it’s irrelevant".
My primary concern in a situation like this is that she’d be kidnapped and presumably extremely not happy about that.
If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that’s just teleporting (with less savings on airfare.) If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn’t particularly bother me. (I suppose this sounds bold, but I’m almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their “deaths” would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
I expect we’d adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
The closest analogy I can think of is if I lived in a culture where families only had one child each, and was suddenly introduced to brothers. It would be strange to find two people who shared parents, a childhood environment, and so forth—attributes I was accustomed to treating as uniquely associated with a person, but it turned out I was wrong to do so. It would be disconcerting, but I expect I’d get used to it.
If you count a fertilized egg as a person, then two identical twins did use to be the same person. :-)
And chimeras used to be two different people.
While I don’t doubt that many people would be OK with this, I wouldn’t, because of the lack of certainty and provability.
My difficulty with this concept goes further. Since it’s not verifiable that the copy is you, even though it seems to present the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
“Oh but the copies running in a simulation are the same thing as the originals really”, protests the AI after all the humans have been destructively scanned and copied into a simulation...
That shouldn’t happen as long as the AI is friendly—it doesn’t want to destroy people.
But is it destroying people if the simulations are the same as the original?
There are a few interesting possibilities here:
1) The AI and I agree on what constitutes a person. In that case, the AI doesn’t destroy anything I consider a person.
2) The AI considers X a person, and I don’t. In that case, I’m OK with deleting X, but the AI isn’t.
3) I consider X a person, and the AI doesn’t. In that case, the AI is OK with deleting X, but I’m not.
You’re concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person’s existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is.
But why would we assume that? It seems implausible to me.
Ha Ha. You’re right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, I’m saying exactly that, though I hadn’t broken it down to that extent before.
The last part I disagree with, which is the assumption that I’m always better at detecting people than the AI is. Clearly I’m not, but in my own personal case I don’t trust it if it disagrees with me, because of simple risk management. If it’s wrong, and it kills me and then resurrects a copy, then I have experienced total loss. If it’s right, then I’m still alive.
But I don’t know the answer. And thus I would have to say that it would be necessary to allow only scenario #1 if I were designing the AI, because even though I could be wrong, I’d prefer not to take the risk of personal destruction.
That said, if someone chose to destructively scan themselves to upload, that would be their personal choice.
Well, I certainly agree that all else being equal we ought not kill X if there’s a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that.
But if for whatever reason I’m in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I’m the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI’s judgment, or less.
And obviously that’s going to depend on the particulars of X, Y, me, and the AI… but it’s certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
I think we’re on the same page from a logical perspective.
My guess is the perspective taken is that of physical science vs compsci.
My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other, with the only thing differing being the position. It’s basically the same scenario as two electrons with the same spin state, momentum, energy, etc., but different positions. There’s no way to distinguish the two of them from their physical properties, but there are two of them, not one.
Regardless, if you believe they are the same person then you go first through the teleportation device… ;->
In Identity Isn’t In Specific Atoms, Eliezer argued that even from what you called the “physical science perspective,” the two electrons are ontologically the same entity. What do you make of his argument?
What do I make of his argument? Well, I don’t have a PhD in physics, though I do have a bachelor’s in physics/math, so my position would be the following:
Quantum physics doesn’t scale up to macro. While swapping two helium atoms between two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, the quantum state changes, so the reason you cannot distinguish two identical copies from each other is not that they are identical; it’s that you cannot even distinguish the original from itself, because the states change each time you measure them.
You could not distinguish the atoms of a macro-scale object composed of atoms of types A, B, and C from those of another macro-scale object composed of atoms of types A, B, and C in exactly the same configuration.
That said, we’re talking about a single object here. As soon as you go to comparing more than one object, it’s not the same: there are the position, momentum, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.
From a physics perspective it makes sense to say two objects of the same type are different even though the properties are the same except for minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn’t make any sense. The two instances of the classes ARE the same because they are logically the same.
Anyway, I’ve segued here: take the two putative electrons in a previous post above. There is no way to distinguish between the two of them except by position, but they ARE two separate electrons; they’re not a single electron. If one of them is part of, e.g., my brain, and then it’s swapped out for the other, there’s no longer any way to tell which is which. It’s impossible. And my guess is this is what’s causing the confusion. From a point of view of usefulness, neither of the two objects is different from the other. But they are separate from each other, and destroying one doesn’t mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine, because for him the position and number of copies are irrelevant; it’s the information content that’s important.
For me, sure, if my information content lived on, that would be better than nothing, but it wouldn’t be me.
I wouldn’t take a destructive upload if I didn’t know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn’t cross the street if I didn’t know I wasn’t going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Exactly. Reasonable assurance is good enough, absolute isn’t necessary. I’m not willing to be destructively scanned even if a copy of me thinks it’s me, looks like me, and acts like me.
That said, I’m willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don’t ask me to do it. And expect a bullet if you try to force me!
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority… for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. … what then?
This is a different point entirely. Sure, it’s more efficient to just work with instances of similar objects, and I’ve already said elsewhere that I’m OK with that if it’s objects.
And if everyone else is OK with being destructively scanned, then I guess I’ll have to eke out an existence as a savage. The economy can have my atoms after I’m dead.
Sorry I wasn’t clear—the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it’s not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit… yes?
Yes that’s right.
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily, it’s none of my business.
If what you’re really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we’re not there yet.
I understand that you won’t consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn’t what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
I thought I had answered but perhaps I answered what I read into it.
If you are asking “will I prevent you from gradually moving everything to digital perhaps including yourselves” then the answer is no.
I just wanted to clarify that we were talking about with consent vs without consent.
I agree completely that there are two bunches of matter in this scenario. There are also (from what you’re labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn’t have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people… but what makes that a source of value?
For example: to my way of thinking, what’s valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what’s valuable about the person. There are other things associated with them—for example, a particular set of atoms—but from my perspective that’s pretty valueless. If I lose the atoms while preserving the data, I don’t care. I can always find more atoms; I can always construct a new body. But if I lose the data, that’s the ball game—I can’t reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don’t care… I’ve kept what’s valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care… I’ve lost what’s valuable.
So when I look at a system to determine how many people are present in that system, what I’m counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren’t what’s valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what’s valuable about them with one copy of that data… I don’t need to lug a million bundles of atoms around.
So, as I say, that’s me… that’s what I value, and consequently what I think is important to preserve. You think it’s important to preserve the individual bundles, so I assume you value something different.
What do you value?
More particularly, you regularly change out your atoms.
That turns out to be true, but I suspect everything I say above would be just as true if I kept the same set of atoms in perpetuity.
I agree that it would still be true, but our existence would be less strong an example of it.
I understand that you value the information content and I’m OK with your position.
Let’s do another thought experiment then: say we’re some unknown number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country, just using the USA as an example) but didn’t want the people. It did, however, value the ideas, opinions, memories, etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories, etc. into some kind of data store which it could access at its leisure later, would that be the same thing as the original living people?
I’d argue that from a comp sci perspective, what you have just done is build a static class which describes the people, their ideas, memories, etc., but this is not the original people; it’s just a model of them.
Now don’t get me wrong, a model like that would be very valuable; it just wouldn’t be the original.
And yes, of course some people value originals; otherwise you wouldn’t have to pay millions of dollars for postage stamps printed in the 1800s, even though I’d guess that scanning such a stamp and printing out a copy of it would, to all intents and purposes, be the same.
In the thought experiment you describe, they’ve preserved the data and not the patterns of interaction (that is, they’ve replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
I understand that there’s something else in the original that you value, which I don’t… or at least, which I haven’t thought about. I’m trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn’t exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value being the original… or perhaps being the original, if that’s different… and the value of that doesn’t derive from anything, it’s just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you’d previously thought?
I guess from your perspective you could say that the value of being the original doesn’t derive from anything and it’s just a primitive, because the macro information is the same except for position (though the quantum states are all different even at the point of copy). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally and in fact be sentient beings in their own right.
Yes, if I woke up tomorrow and you could convince me I was just a copy then this is something I have already modeled/daydreamed about and my answer would be: I’d be disappointed that I wasn’t the original but glad that I had existence.
OK.
Hmm
I find “if it walks like a duck and talks like a duck” to be a really good way of identifying ducks.
Agreed. It’s the only way we have of verifying that it’s a duck.
But is the destructively scanned duck the original duck, even though it appears to be the same to all intents and purposes, when you can see the mulch that used to be the body of the original lying there beside the new copy?
I’m not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity—the rock doesn’t care, and I’m not convinced a duck would care either. Personal identity, however, is a whole other thing—there’s this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren’t right it’s not the same person. If you make a perfect copy of a person and destroy the original, then it’s the same person. You’ve just teleported them—even if you can see the left over dust from the destruction. Being made of the “same” atoms, after all, has nothing to do with identity—atoms don’t have individual identities.
That’s a point of philosophical disagreement between us. Here’s why:
Take an individual.
Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.
You create a clone of that person.
Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.
Now let’s say technology exists to transfer memories and mind states.
After you create the clone-that-is-not-you, you then put your memories into it.
If we keep the original alive the clone is still not you. How does killing the original QUICKLY make the clone you?
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label “me”? What conceivable difference does it make whether we label both of those people “me”?
If there is some X that differs between those people, such that the label “me” applies to one value of X but not the other value, then talking about which one is “me” makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is “me,” we are not talking about anything in the world. Under those circumstances it’s best to set aside the question of which is “me.”
“(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label ‘me’? What conceivable difference does it make whether we label both of those people ‘me’?”
Because we already have a legal precedent: twins. Though their memories are very limited, they are legally different people, and rightly so, in my opinion.
Identical twins, even at birth, are different people: they’re genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I’m not sure why you’re bringing this up in the first place: legalities don’t help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you’re trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I’m not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I’m not a specialist in this area, though. What’s your reasoning?
Risk avoidance. I’m uncomfortable with taking the position that if you create a second copy and destroy the original, the copy simply is the original, because if it isn’t, then the original is now dead.
Yes, but how do you conclude that a risk exists? Two philosophical positions don’t mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can’t yet trace down human thoughts and motivations to the neuron level, but we’ll certainly be able to by the time we’re able to destructively scan people into simulations; if there’s any secret sauce involved, we’ll by then know it’s there if not exactly what it is. If dualism turns out to win by then I’ll gladly admit I was wrong; but if any evidence hasn’t shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
Here’s why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you’ve already assumed a conclusion to all questions along these lines—and in fact gone past all questions of risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.
Yes, we have two people after this process has completed… I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you’re getting at.
No, if we’re talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations starts to accumulate independent experiences, then we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven’t created a dozen new people, and if I delete all of those duplicates I haven’t destroyed any people.
The uniqueness of experience is important.
this follows: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me—it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied—but they’re not identical to each other, since they would start to have different experiences immediately. “Identical” here meaning “that same person as”—not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I’m not an exact copy of myself from five seconds ago anyway. So the answer to the question is killing the original quickly doesn’t make a difference to the identity of a clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral. Tell me if I’m not being clear.
Regardless of what you believe you’re avoiding the interesting question: if you overwrite your clone’s memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don’t think anyone doubts that a clone of me without my memories is a different person.
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back—wouldn’t you be ?
If the copy was so perfect that you couldn’t tell that it wasn’t your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all ?
I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
Really good discussion.
Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.
And yes, if the beings are fully sentient then I agree it’s ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I’m evil then.
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible.
Disturbing how ? Wouldn’t you automatically dismiss the person who tells you this as a crazy person ? If not, why not ?
Er… ok, that’s good to know. edges away slowly
Personally, if I encountered some beings who appeared to be sentient, I’d find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it’s possible that they’re not really sentient, but why risk it, when the probability of this being the case is so low ?
You’re right. It is impossible to determine whether the current copy is the original or not.
“Disturbing how?” Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I’d be really uncomfortable with it. I personally would strenuously object to ever teleporting myself or copying myself by this method into a simulation.
“edges away slowly” lol. Not any more evil than (I believe it was) Phil, who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the average utility function of humanity, even if it excluded my own maximized utility function, because I’m honest enough to say that maximizing my own personal utility function is not in the best interests of humanity. Even then, I believe that producing an AI whose utility function is maximizing the best interests of humanity is incredibly difficult, and thus I have concluded that creating an AI whose definition is just NOT(Unfriendly) and attempting to trade with it is probably far easier. Though I have not read Eliezer’s CEV paper, so I require further input.
“difficult to force them to do my bidding”.
I don’t know if you enjoy video games or not. Right now there’s a first-person shooter called Modern Warfare 3. It’s pretty damn realistic, though the non-player characters (NPCs), which you shoot and kill, are automatons, and we know for sure that they’re automatons. Now fast forward 20 years and we have NPCs which are so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you’d been teleported in this manner ? Would you still be uncomfortable with the process ? If so, why, and how does it differ from the reversed situation that we discussed previously ?
Whoever that Phil guy is, I’m going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight.
I haven’t played that particular shooter, but I am reasonably certain that these NPCs wouldn’t come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test.
I would say that, most likely, yes, it is murder.
I’m talking exactly about a process that is so flawless you can’t tell the difference. Where my concern comes from is that if you don’t destroy the original you now have two copies. One is the original (although you can’t tell the difference between the copy and the original) and the other is the copy.
Now here’s where I’m uncomfortable: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis, which I oppose, states that the original is still alive, and yet I can see the dead body there.
Simply speeding the process up perhaps by vaporizing the original doesn’t make the outcome any different, the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I’d still be reluctant to do this myself.
That said, I’d be willing to become a hybrid organism slowly by replacing parts of me and although it wouldn’t be the original me at the end of the total replacement process it would still be the hybrid “me”.
Interesting position on the killing of the NPCs, and in terms of usefulness that’s why it doesn’t matter to me whether a being is sentient or not in order to meet my definition of AI.
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren’t one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we’re not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
I completely understand your logic, but I do not buy it, because I do not agree that at the instant of the copying you have one person at two locations. They are two different people: one being the original and the other being an exact copy.
Which one is which ? And why ?
OK, cool… I understand you, then.
Can you clarify what, if anything, is uniquely valuable about a person who is an exact copy of another person?
Or is this a case where we have two different people, neither of whom have any unique value?
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there—and you have a perfectly fine explanation for the presence of a dead body.
There is no such thing as “the same atoms”—atoms do not have individual identities.
I don’t think anyone was arguing that the AI needed to be conscious—intelligence and consciousness are orthogonal.
K here’s where we disagree:
Original Copy A and new Copy B are indeed instances of person X, but it’s not a class with two instances as in CompSci 101. The class is Original A, and it’s B that is the instance. They are different people.
In order to make them the same person you’d need to do something like this: Put some kind of high bandwidth wifi in their heads which synchronize memories. Then they’d be part of the same hybrid entity. But at no point are they the same person.
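For what it’s worth, the class/instance framing can be made concrete. A small Python sketch (purely illustrative, and not a claim about how minds actually work) shows how both readings fall out of the same copy operation:

```python
import copy

class Person:
    """Toy stand-in for a mind; nothing here models consciousness."""
    def __init__(self, memories):
        self.memories = list(memories)

    def experience(self, event):
        self.memories.append(event)

original = Person(["childhood", "learned to code"])
duplicate = copy.deepcopy(original)  # exact copy at the instant of duplication

print(original is duplicate)                     # False: two distinct objects ("different people")
print(original.memories == duplicate.memories)   # True: identical content ("the same person X")

duplicate.experience("woke up as the copy")      # divergence begins immediately
print(original.memories == duplicate.memories)   # False: now different by content as well
```

The dispute in this thread is essentially over which comparison, object identity or content equality, is the one that matters for personal identity.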
I don’t know why it matters which is the original—the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other—but they’re both still Person X.
It matters to you if you’re the original and then you are killed.
You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact or even in law (whatever that means).
Also when/if this comes about I bet the law will side with me and define them as two different people in the eyes of the law. (And I’m not using this to fallaciously argue from authority, just pointing out I strongly believe I am correct—though willing to concede if there is ultimately some logical way to prove they are the same person.)
The reason is obvious. If they are the same person and one of them kills someone are both of them guilty? If one fathers a child, is the child the offspring of both of them?
Because of this I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people to each other after their mental states begin to diverge, so I can’t close the logical gap: why do you say they are the same person, rather than copies of the same person, one being the original? You come partway to saying they are different people. Why not come all the way?
I agree with TheOtherDave. If you imagine that we scan someone’s brain and then run one-thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn’t matter if we turn one of those simulations off. Nobody’s died. What I’m saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states. I’m not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn’t do it isn’t the same person as the one who did—they didn’t actually experience committing the murder or whatever. On the other hand, they’re also someone who would have done it in the same circumstances—so they’re dangerous. I don’t know.
You are decreasing the amount of that person that exists.
Suppose the multiple worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Well, maybe. But there is a whole universe full of people who will never speak to you again and are left to grieve over your body.
Good point.
There is of course a difference between death and non-existence.
Yes, there is a measure of that person’s existence (number of perfect copies) which I’m reducing by deleting a perfect copy of that person. What I’m saying is precisely that I don’t care, because that is not a measure of people I value.
Similarly, if I gain 10 pounds, there’s a measure of my existence (mass) which I thereby increase. I don’t care, because that’s not a measure of people I value.
Neither of those statements is quite true, admittedly. For example, I care about gaining 10 pounds because of knock-on effects—health, vanity, comfort, etc. I care about gaining an identical backup because of knock-on effects—reduced risk of my total destruction, for example. Similarly, I care about gaining a million dollars, I care about gaining the ability to fly, there’s all kinds of things that I care about. But I assume that your point here is not that identical copies are valuable in some sense, but that they are valuable in some special sense, and I just don’t see it.
As far as MWI goes, yes… if you posit a version of many-worlds where the various branches are identical, then I don’t care if you delete half of those identical branches. I do care if you delete me from half of them, because that causes my loved ones in those branches to suffer… or half-suffer, if you like. Also, because the fact that those branches have suddenly become non-identical (since I’m in some and not the others) makes me question the premise that they are identical branches.
And this “amount” is measured by the number of simulations? What if one simulation is using double the amount of atoms (e.g. by having thicker transistors), does it count twice as much? What if one simulation double checks each result, and another does not, does it count as two?
The equivalence between copies spread across the many worlds and identical simulations running in the same world is yet to be proven or disproven, and I expect it won’t be proven or disproven until we have some better understanding of the hard problem of consciousness.
Can’t speak for APMason, but I say it because what matters to me is the information.
If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it’s the same person. If a person doesn’t contain any unique information, whether they live or die doesn’t matter nearly as much to me as if they do.
And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn’t make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn’t mean I’m a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn’t mean I’m the same person I was.
“If the information is different, and the information constitutes people, then it constitutes different people.”
True and therein lies the problem. Let’s do two comparisons: You have two copies. One the original, the other the copy.
Compare them on the macro scale (i.e. non quantum). They are identical except for position and momentum.
Now let’s compare them on the quantum scale: Even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. Just the simple act of observing the states (either by scanning it or by rebuilding it) changes it and thus on the quantum scale we have two different entities even though they are identical on the macro scale except for position and momentum.
Using your argument that it’s the information content that’s important: they don’t really have any useful differences in information content, especially not on the macro scale, but they have significant differences in all of their non-useful quantum states. They are physically different entities.
Basically what you’re talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.
I’m thus uncomfortable with killing one of them and then saying the person still exists.
So, what you value is the information lost during the copy process? That is, we’ve been saying “a perfect copy,” but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren’t good enough?
Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
“Again, just to be clear, what I’m trying to understand is what you value that I don’t. If data at these high levels of granularity is what you value, then I understand your objection. Is it?”
OK, I’ve mulled your question over and I think I have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it’s actually me that’s alive if you plan to kill me. Because we’re basically creating an additional copy really quickly and then disposing of the original, I have a hard time believing that we’re doing something equivalent to a single copy walking through a gate.
I don’t believe that the information by itself is enough to answer the question “Is it the original me?” in the affirmative. And given that it’s not even all of the information (though it is all of the information on the macro scale), I know for a fact we’re doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether (A == B), but since I know from physics that they’re not exactly equivalent once you go down to the quantum level, I just can’t buy into it, though things would be murkier if the quantum states were provably identical.
Does that answer your question?
Maybe?
Here’s what I’ve understood; let me know if I’ve misunderstood anything.
Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value… call that “something” X for convenience.
If P’ is a duplicate of P, then P’ does not possess X, or at least cannot be demonstrated to possess X.
This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.
X is preserved for P from one moment/day/year to the next, even though P’s information content—at a macroscopic level, let alone a quantum one—changes over time. I conclude that X does not depend on P’s information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring.
By similar reasoning, I conclude that X doesn’t depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.
I don’t have any idea of what that X might actually be, since we’ve eliminated from consideration everything about people I’m aware of.
I’m still interested in more details about X, beyond the definitional attribute of “X is that thing P has that P’ doesn’t”, but I no longer believe I can elicit those details through further discussion.
EDIT: Yes, you did understand. Though I can’t personally say I’m willing to come out and definitively call the X a red herring, it sounds like you are willing to do this.
I think it’s an axiomatic difference Dave.
It appears from my side of the table that you’re starting from the axiom that all that’s important is the information, and that originality and/or the physical object carrying that information means nothing.
And you’re dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (and the position is different, and the atoms are different, though unidentifiably so other than saying that the count is 2x rather than x of atoms), it’s impossible to dismiss the question “Am I dying when I do this?”, because you are making a lossy copy even from your standpoint. The only get-out clause is to say “it’s a close enough copy, because the quantum states and position are irrelevant, and we can’t measure the difference between atoms in two identical copies on the macro scale other than saying we’ve now got 2x the atoms whereas before we had 1x.”
It’s exactly analogous to a bacterium budding. The original cell dies and close to an exact copy is budded off. If the daughter bacterium were an exact copy of the information content of the original bacterium, then you’d have to say from your position that it’s the same bacterium and the original is not dead, right? Or maybe you’d say that it doesn’t matter that the original died.
My response to that argument (if it were the line of reasoning you took; is it?) would be that it matters volitionally: if the original didn’t want to die and it was forced to bud, then it’s been killed.
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.
The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)
But I did say that identity of quantum states is a red herring.
As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren’t identical. If you believe that X can remain unchanged while Y changes, then you don’t believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don’t believe that identity depends on quantum states.
To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren’t me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don’t see any particular reason to avoid having the new person appear instantaneously in my mom’s house, rather than having it appear in an airplane seat an incremental distance closer to my mom’s house.
Other stuff:
Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.
I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.
I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)
A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is “no,” since the duplicate isn’t them; they stopped existing just as they desired.
Other stuff:
“Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn’t matter that the parent cell died at the instant of budding.”
OK good to know. I’ll have other questions but I need to mull it over.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.” I agree with this but I don’t think it supports your line of reasoning. I’ll explain why after my meeting this afternoon.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)” Interesting. I have a contrary line of argument which I’ll explain this afternoon.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)” Disagree. Again I’ll explain why later.
“A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is ‘no,’ since the duplicate isn’t them; they stopped existing just as they desired.” Maybe. If you have destructively scanned them then you have killed them, so they now no longer exist; in that respect you have complied perfectly with their wishes, from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract saying they have given you the right to make copies? Do they even own this right to make copies? I don’t know.
What I can say is that our differences in opinion here would make a superb science fiction story.
There’s a lot of decent SF on this theme. If you haven’t read John Varley’s Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. “Steel Beach” isn’t a bad place to start.
Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK but didn’t really touch much on our points of contention as such. In fact I’d say it steered clear of them, since there wasn’t really the concept of uploads etc. Interestingly, I haven’t read anything that really examines closely whether the copied upload really is you. Anyways.
“I would also say that it doesn’t matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren’t identical to the cells that comprised me then.”
OK, I have to say that now I’ve thought it through, I think this “you’re not the same as you were yesterday” line is a straw-man argument, used as a pretext for saying that copy-and-destroy is the same as continuing from one moment to the next. It misses the point entirely.
Although you are legally the same person, it’s true that you’re not exactly physically the same person today as you were yesterday, and it’s also true that you have almost none of the original physical matter or cells in you today that you had when you were a child.
That this is true in no way negates the main point: human physical existence does have continuity from one point in time to the next. I have some of the same cells I had up to about seven to ten years ago. I have some inert matter in me from the time I was born, AND I have continual memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.
Building a copy of yourself and then destroying the original has no continuity. It’s directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and it is patently not the same concept as normal human existence. Not even close.
That you and some others might dismiss the differences is fine, and if you hypothetically wanted to take the position that killing yourself so that a copy of your mind state could exist indefinitely is acceptable, then I have no problem with that, but it’s patently not the same as the process you, I and everyone else goes through on a day-to-day basis. It’s a new thing. (Although it’s already been tried in nature as the asexual budding process of bacteria.)
I would appreciate it, however, if, when that choice is offered to others, it is clearly explained to them what is happening: i.e., physical body death and a copy being resurrected, not that they themselves continue living, because they do not. Whether you consider it irrelevant is beside the point. Volition is very important, but I’ll get to that later.
“I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)”
That’s directly analogous to the multi-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies then you go on existing. Fine, but it’s not me. I’m mortal and always will be, even if some virtual copy of me might not be. So you guessed correctly: unless we’re using some different definition of “person” (which is likely, I think), the person did not survive.
“I agree that volition is important for its own sake, but I don’t understand what volition has to do with what we’ve thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn’t kill the original, then it doesn’t, whether the original wants to die or not. It might be valuable to respect people’s volition, but if so, it’s for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)”
Volition has everything to do with it. While it’s true that volition is independent of whether they have died or not (agreed), the reason it’s important is that some people will likely use your position to justify forced destructive scanning at some point, because it’s “less wasteful of resources” or some other pretext.
It’s also particularly important in the case of an AI over which humanity would have no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the original, and it needs the space for its purposes, then there is nothing to stop it from just going ahead, unless volition is considered to be important.
Here’s a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
So here’s a scenario for you, given that you think information is the only important thing: do you consider a person who has lost much of their memory to be the same person? What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted over top? Did they just die? What if it’s someone else’s memories: did they just die?
Here’s yet another scenario. I wonder if you have thought about this one: scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using “identical” to mean functionally identical, because we can’t get exactly identical, as discussed before). Copy the contents of the mind-states into that clone.
Ask yourself this question: How many deaths have taken place here?
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.
I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don’t continue living in the second case.
I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.
I agree that if a person is being offered a choice, it is important for that person to understand the choice. I’m perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I’m perfectly content not to describe the latter process as the continuation of an existing life.
I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.
Yes. As I say, I endorse respecting individuals’ stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)
It depends on what ‘much of’ means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I’ve left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it’s hard to say where that point is.
I am content to say I’m the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I’m content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don’t think there’s any real fact of the matter here; it’s not wrong to say that I’m a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.
If I am replaced by a different person’s memories and patterns of interaction, I cease to exist.
Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.
Incidentally, I think you’re being a lot more adversarial here than this discussion actually calls for.
Very Good response. I can’t think of anything to disagree with and I don’t think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.
Thanks for the discussion.
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
Here’s a thought experiment for you to outline the difference (whether or not it makes sense from your position of only valuing the information): let’s say you could slowly transfer a person into an upload by the following method. You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate) that can interface directly with the remaining neurons.
Am I dead? Yes, but not all of me is, and we’re now left with a hybrid being. It’s not completely me, but I’ve not yet been killed by the process, and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn’t me).
Gradually over a process of time (let’s say years rather than days or minutes or seconds) all of the parts of the brain are replaced.
At the end of it I’m still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.
Now I know the position you’d take is that speeding that process up is mathematically equivalent.
It isn’t from my perspective. I’m dead instantly, and I don’t get the chance to transition my existence in a way that’s meaningful to me.
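A rough sketch of the two paths being compared here, in Python (the region names and timescales are made up, and this is not anyone’s model of minds): the end state is identical either way; the disagreement is purely about whether the transition path matters.

```python
BRAIN_REGIONS = ["visual cortex", "hippocampus", "prefrontal cortex", "cerebellum"]

def gradual_upload(regions, years_per_step=2):
    """Replace one region at a time, over years, yielding a hybrid at each step."""
    substrate = []
    for step, region in enumerate(regions, start=1):
        substrate.append(region)  # this region now runs on the new substrate
        still_biological = regions[step:]
        print(f"year {step * years_per_step}: hybrid; still biological: {still_biological}")
    return substrate

def instant_upload(regions):
    """Copy everything in one step (the copy-and-destroy case)."""
    return list(regions)

# Same end state either way; only the transition differs.
assert gradual_upload(BRAIN_REGIONS) == instant_upload(BRAIN_REGIONS)
```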
Sidetracking a little: I suspect you were comparing your unknown quantity X to some kind of “soul”. I don’t believe in souls. I value being alive and having experiences and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and then copies my information content into some kind of magical “spirit being” is exactly identical to deconstructing me (killing me) and making a copy, even if I took the position that the reconstructed being on “the last day” was me. Which I don’t. As soon as I die, that’s me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were possible).
You’re basically asking why I should value myself over a separate in space exact copy of myself (and by exact copy we mean as close as you can get) and then superimposing another question of “isn’t it the information that’s important?”
Not exactly.
I’m concerned that I will die, and I’m examining the hypotheses as to why it’s not me that dies. The best response I can come up with is “you will die, but it doesn’t matter, because there’s another identical (or as close as possible) copy still around.”
As to what you value that I don’t, I don’t have an answer. Perhaps a way to elicit the answer would be to ask you why you only value the information and not the physical object also?
I’m not asking why you should value yourself over an exact copy, I’m asking why you do. I’m asking you (over and over) what you value. Which is a different question from why you value whatever that is.
I’ve told you what I value, in this context. I don’t know why I value it, particularly… I could tell various narratives, but I’m not sure I endorse any of them.
Is that a typo? What I’ve been trying to elicit is what xxd values here that TheOtherDave doesn’t, not the other way around. But evidently I’ve failed at that… ah well.
Thanks Dave. This has been a very interesting discussion and although I think we can’t close the gap on our positions I’ve really enjoyed it.
To answer your question “what do I value?”: I think I answered it already. I value not being killed.
The difference in our positions appears to be some version “but your information is still around” and my response is “but it’s not me” and your response is “how is it not you?”
I don’t know.
“What is it I value that you don’t?” I don’t know. Maybe I consider myself to be a higher-resolution copy, or a less lossy copy, or something. I can’t put my finger on it, because when it comes down to it, why should mere quantum states make a difference to me when all the macro information is the same apart from position and perhaps momentum? I don’t really have an answer for that.
But you want the things you think are people to really be people, right?
I’m not sure I care. For example, if I had my evil way and I went FOOM, then part of my optimization process would involve mind control and somewhat deviant roleplay with certain porno actresses. Would I want those actresses to be controlled against their will? Probably not. But at the same time it would be good enough if the simulations could imitate the actresses in a way such that I could not tell the difference between the original and the simulated.
Others may have different opinions.
You wouldn’t prefer to forego the deviant roleplay for the sake of, y’know, not being evil?
But that’s not the point, I suppose. It sounds like you’d take the Experience Machine offer. I don’t really know what to say to that except that it seems like a whacky utility function.
How is the deviant roleplay being evil if the participants are not being coerced or are catgirls? And if it’s not being evil then how would I be defined as evil just because I (sometimes—not always) like deviant roleplay?
That’s the crux of my point. I don’t reckon that optimizing humanity’s utility function (or any individual’s, for that matter) is the same as merely being the opposite of unfriendly AI, and I furthermore reckon that trying to seek that goal is much, much harder than trying to create an AI that at a minimum won’t kill us all AND might trade with us if it wants to.
Oh, sorry, I interpreted the comment incorrectly—for some reason I assumed your plan was to replace the actual porn actresses with compliant simulations. I wasn’t saying the deviancy itself was evil. Remember that the AI doesn’t need to negotiate with you—it’s superintelligent and you’re not. And while creating an AI that just ignores us but still optimises other things, well, it’s possible, but I don’t think it would be easier than creating FAI, and it would be pretty pointless—we want the AI to do something, after all.
A-Ha!
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah I get you. So it’s by definition evil if I coerce the catgirls by mind control. I suppose logically I can’t have my cake and eat it since I wouldn’t want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I’ve already said elsewhere that I wouldn’t trust my own function to be the optimal one.
I doubt, however, that we’d easily find a candidate function from a single individual for similar reasons.
I think we’ve slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove sentience so they did as they were told—which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?) - the moral character of which depends on things like whether you get the consent of the actresses involved.
But yes! Of course I want the AGI to do something. If it doesn’t do anything, it’s not an AI. It’s not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
Technically, it’s still an AI, it’s just a really useless one.
Exactly.
So “friendly” is therefore a conflation of NOT(unfriendly) AND useful rather than just simply NOT(unfriendly) which is easier.
Off. Do I win?
You’re determined to make me say LOL so you can downvote me right?
EDIT: Yes you win. OFF.
Correct. I (unlike some others) don’t hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore destructively scanning the porn actresses would be killing them, in my mind. Non-destructively scanning them and then using the simulated versions for “evil purposes”, however, is not killing the originals. Whether using the copies for evil purposes even against their simulated will is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentient. If they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something; it’s just that I think it will be incredibly difficult to get it to do something if it’s recursively self-improving, and that it becomes progressively more difficult the further away you go from defining friendly as NOT(unfriendly).
Why is it recursively self-improving if it isn’t doing anything? If my end goal was not to do anything, I certainly don’t need to modify myself in order to achieve that better than I could achieve it now.
Isn’t doing anything for us…
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” is different from just “consciousness”.
I disagree. Creating a not-friendly-but-harmless AGI shouldn’t be any easier than creating a full-blown FAI. You’ve already had to do all the hard working of making it consistent while self-improving, and you’ve also had the do the hard work of programming the AI to recognise humans and to not do harm to them, while also acting on other things in the world. Here’s Eliezer’s paper.
OK give me time to digest the jargon.
Newsflash: the human body is a machine too! I’m being deliberately antagonistic here; it’s so obvious that a human (body and mind are the same thing) is a machine that it’s irrelevant to even mention it.
Song
lyrics
story
article—really much more a discussion than a lesson.
I would say that they both cease to be you, just as the current, singular “you” ceases to be that specific “you” the instant you see some new sight or think some new thought.
Agreed, though I would put something like, “if a person diverged into two separate versions who then became two separate people, then one version shouldn’t be blamed for the crimes of the other version”.
On a separate note, I’m rather surprised to hear that you prefer consequentialist morality to deontological morality; I was under the impression that most Christians followed the Divine Command model, but it looks like I was wrong.
I mean something like, “whatever it is that causes you to believe in in God, the afterlife, and the Bible in the first place”, but point taken.
Ooh, I see, I totally misunderstood what you meant. By feel, you mean “experience feelings”, thus something akin to qualia, right ? But in this case, your next statement is problematic:
In this case, wouldn’t it make sense to conclude that mind uploading is a perfectly reasonable procedure for anyone (possibly other than yourself) to undergo ? Imagine that Less Wrong was a community where mind uploading was common. Thus, at any given point, you could be talking to a mix of uploaded minds and biological humans; but you’d strive to treat them all the same way, as human, since you don’t know which is which (and it’s considered extremely rude to ask).
This makes sense to me, but this would seem to contradict your earlier statement that you could, in fact, detect whether any particular entity had a soul (by asking God), in which case it might make sense for you to treat soulless people differently regardless of what they acted like.
On the other hand, if you’re willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one ?
Good point. I should point out that some people do trust select individuals to do just that, and many more people trust psychiatrists and neurosurgeons enough to give them at least some control over their minds and brains. That said, the hypothetical technician in charge of uploading your mind would have much greater degree of access than any modern doctor, so your objection makes sense. I personally would likely undergo the procedure anyway, assuming the technician had some way of proving that he has a good track record, but it’s possible I’m just being uncommonly brave (or, more likely, uncommonly foolish).
Haha yes, that’s a good point, you should probably stick to saying things that are actually relevant to the topic, otherwise we’d never get anywhere :-)
FWIW, this is one of the main goals of transhumanists, if I understand them correctly: to be able to experience the world much more fully than their current bodies would allow.
Oh, I agree (well, except for that whole soul thing, obviously). As I said before, I don’t believe that anything like full mental uploading, not to mention the Singularity, will occur during my lifetime; and I’m not entirely convinced that such things are possible (the Singularity seems especially unlikely). Still, it’s an interesting intellectual exercise.
I typed up a response to this. It wasn’t a great one, but it was okay. Then I hit the wrong button and lost it and I’m not in the mood to write it over again because I woke up early this morning to get fresh milk. (By “fresh” I mean “under a minute from the cow to me”, if you’re wondering why I can’t go shopping at reasonable hours.) It turns out that four hours of sleep will leave you too tired to argue the same point twice.
That said,
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me. Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone. No need to act needlessly cruel anyway. Plus it’s good practice, since our justice system won’t decide personhood by asking God...
Upvoted in empathy for the feeling of losing a large, well-written comment; and soldiering on to extract at least one relevant point from memory.
In recognition of your effort, I looked up the joke you couldn’t find.
That sounds ecolicious to a city-slicker such as myself, but all right :-)
Fair enough, though I would say that if we assume that souls do not exist, then creating copies is not a problem (other than that it might be a drain on resources, etc.), and uploading may actually dramatically decrease your risk of dying. But if we assume that souls do exist, then your objections are perfectly reasonable.
That makes sense, but couldn’t you ask God somehow whether the person you’re talking to has a soul or not, and then act accordingly? Earlier you indicated that you could do this, but it’s possible I misunderstood.
I apologize; earlier I deliberately glossed over a complicated thought process just to give the conclusion that maybe you could know, as opposed to explaining in full.
God has been known to speak to people through dreams, visions and gut feelings. That doesn’t mean God always answers when I ask questions, which probably has something to do with the weakness of my faith. You could ask and you could try to listen, and if God is willing to answer, and if you don’t ignore obvious evidence due to your own biases*, you could get an answer. But God has for whatever reason chosen to be rather taciturn (I can only think of one person I know who’s been sent a vision from God), so you also might not, and God might speak to one person about it but not everyone, leaving others to wonder if they can trust people’s claims, or to study the Bible and other relevant information to try to figure it out for themselves. And then there are people who just get stuff wrong and won’t listen, but insist they’re right, and insist God agrees with them, confusing anyone God hasn’t spoken to. Hence, if you receive an answer and listen (something that’s happened to me, but not nearly every time I ask a question—at least, not unless we count finding the answer after asking through running into it in a book or something), you’ll know, but there’s also a possibility of just not finding out.
*There’s a joke I can’t find about some Talmudic scholars who are arguing. They ask God; a voice booms out from the heavens declaring which one is right, and the others fail to update.
But schizophrenics have been known to experience those things too. How do you tell the difference—even if you’re the one it’s happening to?
I had to confront that one. Upvoted for being an objection a reasonable person should make.
Be familiar with how mental illnesses and other disorders that can affect thinking actually present. (Not just the DSM. Read what people with those conditions say about them.)
Be familiar with what messages from God are supposed to be like. (From Old Testament examples or Paul’s heuristic. I suppose it’s also reasonable to ascertain whether or not they fit the pattern for some other religion.)
Essentially, look at what your experiences best fit. That can be hard. But if your “visions” are highly disturbing and you become paranoid about your neighbors trying to kill you, it’s more likely schizophrenia than divine inspiration. This applies to other things as well.
Does it actually make sense? Is it a message saying something, and then another one of the same sort, proclaiming the opposite, so that to believe one requires disbelieving the other?
Is there anything you can do to increase the probability that you’re mentally healthy? Is your thyroid okay? How are your adrenals? Either could get sick in a way that mimics a mood disorder. Also consider whether your lifestyle is conducive to mental health: sleep problems? Poor nutrition?
Run it by other people who know you well and whom you would trust to tell if you were mentally ill.
No certainties. Just ways to be a little more sure. And that leads into the next one.
Pick the most likely interpretation and go with it and see if your quality of life improves. See if you’re becoming a better person.
“The angel of the Lord appeareth to Joseph in a dream, saying, Arise, and take the young child and his mother, and flee into Egypt, and be thou there until I bring thee word: for Herod will seek the young child to destroy him. When he arose, he took the young child and his mother by night, and departed into Egypt.”
I work in a psych hospital, and the delusional patients there uniformly believe that their delusions make sense.
This is the most likely to work. The delusional people I know are aware that other people disagree with their delusions. Relatedly, there is great disagreement on the topic of religion.
Good point. Of course, this one does make a testable prediction, and, unlike what might be more characteristic of a mental illness, the angel tells him there’s trouble, he avoids it, and we have no further evidence of his getting any more such messages. That at least makes schizophrenia a much less likely explanation than just having a weird dream, so the weird dream is what to try ruling out.
I have to admit that I’m not familiar with Paul’s heuristic—what is it?
As for the Old Testament, God gives out some pretty frightening messages in there, from “sacrifice your son to me” to “wipe out every man, woman, and child who lives in this general area”. I am reasonably sure you wouldn’t listen to a message like that, but why wouldn’t you?
I have heard this sentiment from other theists, but I still understand it rather poorly, I’m ashamed to admit… maybe it’s because I’ve never been religious, and thus I’m missing some context.
So, what do you mean by “a better person”; how do you judge what is “better”? In addition, let’s imagine that you discovered that believing in, say, Buddhism made you an even better person. Would you listen to messages that appear to be Buddhist, and discard those that appear to be Christian but contradict Buddhism—even though you’re pretty sure that Christianity is right and Buddhism is wrong?
I think I might be too tired to give this the response it deserves. If this post isn’t a good enough answer, ask me again in the morning.
That you can tell whether a spirit is good or evil by whether or not it says Jesus is Lord.
Well, right here I mean that if you’ve narrowed it down to either schizophrenia or Christianity is true and God is speaking to you, if it’s the former, untreated, you expect to feel more miserable. If it’s the latter, by embracing God, you expect it’ll make your quality of life improve. “Better person” here means “person who maximizes average utility better”.
Oh, I see, and the idea here is that the evil spirit would not be able to actually say “Jesus is Lord” without self-destructing, right? Thanks, I get it now; but wouldn’t this heuristic merely help you to determine whether the message is coming from a good spirit or an evil one, not whether the message is coming from a spirit or from inside your own head?
I haven’t studied schizophrenia in any detail, but wouldn’t a person suffering from it also have a skewed subjective perception of what “being miserable” is?
Some atheists claim that their life was greatly improved after their deconversion from Christianity, and some former Christians report the same thing after converting to Islam. Does this mean that the Christian God didn’t really talk to them while they were religious, after all—or am I overanalyzing your last bullet point?
Understood, though I was confused for a moment there. When other people say “better person”, they usually mean something like “a person who is more helpful and kinder to others”, not merely “a happier person”, though obviously those categories do overlap.
I just lost my comment by hitting the wrong button. Not being too tired today, though, here’s what I think in new words:
Yes. That’s why we have to look into all sorts of possibilities.
Speaking here only as a layperson who’s done a lot of research, I can’t think of any indication of that. Rather, they tend to be pretty miserable if their psychosis is out of control (with occasional exceptions). One person’s biography that I read recounts having it mistaken for depression at first, and believing that herself since it fit. That said, conventional approaches to treating schizophrenia don’t help much/any with half of it, the half that most impairs quality of life. (Not that psychosis doesn’t, but as a quick explanation, they also suffer from the “negative symptoms” which include stuff like apathy, poor grooming and stuff. The “positive symptoms” are stuff like hearing voices and being delusional. In the rare* cases where medication works, it only treats positive symptoms and usually exacerbates negative symptoms. (Just run down a list of side-effects and a list of negative symptoms. It helps if you know jargon.) Hence, poor quality of life.) So it’s also possible that receiving treatment for a mental illness you actually have would fail to increase quality of life. Add in abuses by the system and it could even decrease it, so this is definitely a problem.
Aris understood correctly.
*About a third of schizophrenics are helped by medication. Not rare, certainly, but that’s less than half. Guidelines for treating schizophrenia are irrational. I will elaborate if asked, with the caveat that it’s irrelevant and I’m not a doctor.
And I left stuff out here that was in the first.
Short version: unsurprising because of things like this. People can identify as Christian while being confused about what that means.
Surprising. My model takes a hit here. Do you have links to firsthand accounts of this?
I’m surprised by your surprise.
I generally expect that people who make an effort to be X will subsequently report that being X improves their life, whether we’re talking about “convert to Christianity” or “convert to Islam” or “deconvert from Christianity” or “deconvert from Islam.”
Interesting—the flip side is “the grass is always greener.” I am not at all surprised that other effects dominate sometimes, or even a good deal of the time, however.
Can you clarify? Is it your claim that these “confused” Christians are the only ones who experience improved lives upon deconversion? Or did you mean something else?
I’m saying people can believe that they are Christians, go to church, pray, believe in the existence of God and still be wrong about fundamental points of doctrine like “I require mercy, not sacrifice” or the two most important commands, leading to people who think being Christian means they should hate certain people. There are also people who conflate tradition and divine command, leading to groups that believe being Christian means following specific rules which are impractical in modern culture and not beneficial. I expect anyone like that to have an improved quality of life after they stop hating people and doing pointless things. I expect a quality of life even better than that if they stop doing the bad stuff but really study the Bible and be good people, with the caveat that quality of life for those people could be lowered by persecution in some times and places. (They could also end up persecuted for rejecting it entirely in other times and places. Or even the same ones.)
Basically, yeah, only if they’ve done something wrong in their interpretation of Scripture will they like being atheists better than being Christians.
My brain is interpreting that as “well, TRUE Christians wouldn’t be happier/better if they deconverted.” How is this not “No True Scotsman”?
Would you say you are some variety of Calvinist? I’m guessing not, since you don’t sound quite emphatic enough on this point. (For the Calvinist, it’s a point of doctrine that no one can cease being a Christian—they must not have been elect in the first place. I expect you already know this; I’m saying it for the benefit of anyone following the conversation who is lucky enough to not have heard of Calvinism. Also, lots of fundamentalist-leaning groups (e.g., Baptists) have a “once saved always saved” doctrine.)
I hope I’m not coming off confrontational; I had someone IRL tell me I must never have been a real Christian not too long ago, and I found it very annoying—so I may be being a bit overly sensitive.
Explained here. Tell me if that’s not clear.
Um… not exactly?
I was familiar with the concept, but not its name.
You’re not, but I live by Crocker’s Rules anyway.
Could you elaborate on this point a bit? As far as I understand, at least some of the positive symptoms may pose significant existential risks to the patient (and possibly those around him, depending on severity). For example, a person may see a car coming straight at him, and desperately try to dodge it, when in reality there’s no car. Or a person may fail to notice a car that actually exists. Or, in extreme cases, the person may believe that his neighbour is trying to kill him, take preemptive action, and murder an innocent. If I had symptoms like that, I personally would rather live with the negatives for the rest of my life than live with the vastly increased risk that I might accidentally kill myself or harm others—even knowing that I might feel subjectively happier until that happens.
Ok, that makes sense: by “becoming a better person”, you don’t just mean “a happier person”, but also “a person who’s more helpful and nicer to others”; and you choose to believe things that make you such a person.
I have to admit, this mode of thought is rather alien to me, and thus I have a tough time understanding it. To me, this sounds perilously close to wishful thinking. To use an exaggerated example, I would definitely feel happier if I knew that I had a million dollars in the bank. Having a million dollars would also empower me to be a better person, since I could donate at least some of it to charity, or invest it in a school, etc. However, I am not going to go ahead and believe that I have a million dollars, because… well… I don’t.
In addition, there’s a question of what one sees as being “better”. As we’d talked about earlier, at least some theists do honestly believe that persecuting gay people and forcing women to wear burqas is a good thing to do (and a moral imperative). Thus, they will (presumably) interpret any gut feelings that prompt them to enforce the burqa ordinances even harder as being good and therefore godly and true. You (and I), however, would do just the opposite. So, we both use the same method but arrive at diametrically opposed conclusions; doesn’t this mean that the method may be flawed?
My main objection to this line of reasoning is that it involves the “No True Scotsman” fallacy. Who is to say (other than the Pope, perhaps) what being a Christian “really means”? The more conservative Christians believe that feminism is a sin, whereas you do not; but how would you convince an impartial observer that you are right and they are wrong? You could say, “clearly such attitudes harm women, and we shouldn’t be hurting people”, but they’d just retort with, “yes, and incarcerating criminals harms the criminals too, but it must be done for the greater good, because that’s what God wants; He told me so”.
In addition, it is not the case that all people who leave Christianity (be it for another religion, or for no religion at all) come from such extreme sects as the one you linked to. For example, Julia Sweeney (*), a prominent atheist, came from a relatively moderate background, IIRC. More on this below:
I don’t have any specific links right now (I will try to find some later), but apparently there is a whole website dedicated to the subject. Wikipedia also has a list. I personally know at least two people who converted from relatively moderate versions of Christianity to Wicca and Neo-Paganism, and report being much happier as the result, though obviously this is just anecdotal information and not hard data. In general, though, my impression was that religious conversions are relatively common, though I haven’t done any hard research on the topic. There’s an interesting-looking paper on the topic that I don’t have access to… maybe someone else here does?
(*) I just happened to remember her name off the top of my head, because her comedy routine is really funny.
Yeah. You could feel unhappy a lot more if you take the pills usually prescribed to schizophrenics, because side-effects of those pills include mental fog and weight gain. You could also be a less helpful person to others, because you would be less able to do things if you’re on a high enough dose to “zombify” you. Also, Erving Goffman’s work shows that situations where people are in an institution, as he defines the term, cause people to become stupider and less capable. (Kudos to the mental health system for trying to get people out of those places faster—most people who go in get out after a little while now, as opposed to the months it usually took when he was studying. However, the problems aren’t eliminated and his research is still applicable.) Hence, it could make you a worse and unhappier person to undergo treatment.
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence. It’s correlated with self-harm, but not hurting other people.
Mental illness is correlated (no surprise here) with being abused and with substance abuse. Both of those are correlated with violence, leading to higher rates of violence among the mentally ill. Even when not corrected for, the rate isn’t that high and the mentally ill are more likely to be victims of violent crime than perpetrators of it. But when those effects ARE corrected for, mental illness does not, by itself, cause violence.
At all. End of story. Axe-crazy villains in the movies are unrealistic and offensive portrayals of mental illness. /rant
This mode of thought is alien to me too, since I wasn’t advocating it. I’m confused about how you could come to that conclusion. I have been unclear, it seems.
(Seriously, what?)
Okay, so I mean something like this: you think you only want to fulfill your own selfish desires, then you become a Christian, and even though you don’t want to, you decide it’s right to be nice to other people and to spend time praying; after a while you learn that being nice makes you really happy, and that praying makes you happier than you’ve ever been before. That’s what I meant.
Yes. It’s only to be used as an adjunct to thinking things through, not the end-all-be-all of your strategy for deciding what to do in life.
My argument isn’t with people who think feminism is sinful (would you like links to sane, godly people espousing the idea without being hateful?) but with the general tenor of the piece. See below.
Well, not the Pope, certainly. He’s a Catholic. But I thought a workable definition of “Christian” was “person who believes in the divinity of Jesus Christ and tries to follow his teachings”, in which case we have a pretty objective test. Jesus taught us to love our neighbors and be merciful. He repeatedly behaved politely toward women of poor morals, converting them with love and specifically avoiding condemnation. Hence, people who are hateful or condemn others are not following his teachings. If that was a mistake, that’s different, just like a rationalist could be overconfident—but to systematically do it and espouse the idea that you should be hateful clearly goes against what Jesus taught as recorded in the Bible. Here’s a quote from the link:
Compare it with a relevant quote from the Bible, which has been placed in different places in different versions, but the NIVUK (New International Version UK) puts it at the beginning of John 8:
So, it’s not unreasonable to conclude that, whether or not Christianity is correct and whether or not it’s right to lock people up for wearing miniskirts, that attitude is unChristian.
Thank you! I’ll look that over.
I seem to be collecting downvotes, so I’ll shut up about this shortly. But to me, anyway, this still sounds like No True Scotsman. I suspect that nearly all Christians will agree with your definition (excepting Mormons and JW’s, but I assume you added “divinity” in there to intentionally exclude them). However, I seriously doubt many of them will agree with your adjudication. Fundamentalists sincerely believe that the things they do are loving and following the teachings of Jesus. They think you are the one putting the emphasis on the wrong passages. I personally happen to think you probably are much more correct than they are; but the point is neither one of us gets to do the adjudication.
I think this is missing the point: they believe that, but they’re wrong. The fact that they’re wrong is what causes them distress. If you’d like, we can taboo the word “Christian” (or just end the conversation, as you suggest).
I have never before had someone disagree with me on the grounds that I’m both morally superior to other people and a genius.
I wouldn’t go disagreeing with him; I’d try performing a double-blind test of his athletic ability while wearing different pairs of socks. It just seems like the sort of thing that’s so simple to design and test that I don’t know if I could resist. I’d need three people and a stopwatch...
Don’t forget the spare pairs of socks!
Yes, thanks for reminding me. I’d also need pencil and paper.
And a nontrivial amount of time and attention.
I suspect that after the third or fifth such athlete, you’d develop the ability to resist, and simply have your opinion about his or her belief about socks, which you might or might not share depending on the circumstances.
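For what it’s worth, the analysis side of that sock test is simple enough to sketch in Python. Everything below is invented for illustration: the times are made up, and the setup assumes a third person assigns the socks each trial so that neither the runner nor the timer knows which pair was worn.

import random

# Hypothetical sprint times in seconds under the two (blinded) conditions.
lucky_socks = [12.1, 11.9, 12.3, 12.0, 12.2]
plain_socks = [12.2, 12.0, 12.4, 12.1, 12.3]

observed = sum(plain_socks) / len(plain_socks) - sum(lucky_socks) / len(lucky_socks)

# Permutation test: if the socks don't matter, randomly relabeling the trials
# should produce a gap at least this large fairly often.
pooled = lucky_socks + plain_socks
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[5:]) / 5 - sum(pooled[:5]) / 5
    if diff >= observed:
        extreme += 1

print(f"p = {extreme / trials:.3f}")  # fraction of shuffles at least as extreme

The printed p-value just says how often a lucky-socks advantage at least as big as the observed one shows up from relabeling alone; a large value means the trials give no evidence the socks help.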
Uh-oh, that’s a bad sign. If someone on LessWrong thinks something like that, I’d better give it credence. But now I’m confused because I can’t think what has given you that idea. Ergo, there appears to be evidence that I’ve not only made a mistake in thinking, but made one unknowingly, and failed to realize afterward or even see that something was wrong.
So, this gives me two questions and I feel like an idiot for asking them, and if this site had heretofore been behaving like other internet sites this would be the point where the name-calling would start, but you guys seem more willing than average to help people straighten things out when they’re confused, so I’m actually going to bother asking:
What do you mean by “basic premise” and “can’t question” in this context? Do you mean that I can’t consider his nonexistence as a counterfactual? Or is there a logical impossibility in my conception of God that I’ve failed to notice?
Can I have specific quotes, or at least a general description, of when I’ve been evasive? Since I’m unaware of it, it’s probably a really bad thinking mistake, not actual evasiveness—that or I have a very inaccurate self-concept.
Actually, no possibility seems good here (in the sense that I should revise my estimate of my own intelligence and/or honesty and/or self-awareness down in almost every case), except that something I said yesterday while in need of more sleep came out really wrong. Or that someone else made a mistake, but given that I’ve gotten several downvotes (over seventeen, I think) in the last couple of hours, that’s either the work of someone determined to downvote everything I say or evidence that multiple people think I’m being stupid.
(You know, I do want to point out that the comment about testing his lucky socks was mostly a joke. I do assign a really low prior probability to the existence of lucky socks anywhere, in case someone voted me down for being an idiot instead of for missing the point and derailing the analogy. But testing it really is what I would do in real life if given the chance.)
This isn’t a general objection to my religion, is it? (I’m guessing no, but I want to make sure.)
Not how I would have put that, but mostly ADBOC this. (I wouldn’t have called him a man, nor would I have singled out the sky as a place to put him. But yes, I do believe in a god who created everything and loves all, and ADBOC the bit about the 12-year-old—would you like to get into the Problem of Evil or just agree to disagree on the implied point even though that’s a Bayesian abomination? And agree with the last sentence.)
I’d ask you what would look different if I did, but I think you’ve answered this below.
You think I’m one of those people. Let me begin by saying that God’s existence is an empirical fact which one could either prove or disprove.
I worry about telling people why I converted because I fear ridicule or accusations of lying. However, I’ll tell you this much: I suddenly became capable of feeling two new sensations, neither of which I’d felt before and neither of which, so far as I know, has words in English to describe it. Sensation A felt like there was something on my skin, like dirt or mud, and something squeezing my heart, and was sometimes accompanied by a strange scent and almost always by feelings of distress. Sensation B never co-occurred with Sensation A. I could be feeling one, the other or neither, and could feel them to varying degrees. Sensation B felt relaxing, but also very happy and content and jubilant in a way and to a degree I’d never quite been before, and a little like there was a spring of water inside me, and like the water was gold-colored, and like this was all I really wanted forever, and a bit like love. After becoming able to feel these sensations, I felt them in certain situations and not in others. If one assumed that Sensation A was Bad and Sensation B was Good, then they were consistent with Christianity being true. Sometimes they didn’t surprise me. Sometimes they did—I could get the feeling that something was Bad even if I hadn’t thought so (had even been interested in doing it) and then later learn that Christian doctrine considered it Bad as well.
I do not think a universe without God would look the same. I can’t see any reason why a universe without God would behave as if it had an innate morality that seems, possibly, somewhat arbitrary. I would expect a universe without God to work just like I thought it did when I was an atheist. I would expect there to be nothing wrong (no signal saying Bad) with… well, anything, really. A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot. And I certainly wouldn’t expect to get a Good signal on the Bible but a Bad signal on other holy books.
So. That’s the better part of my evidence, such as it is.
This would be considerably more convincing if Christianity were a unified movement.
Suppose there existed only three religions in the world, all of which had a unified dogma and only one interpretation of it. Each of them had a long list of pretty specific doctrinal points, like one religion considering Tarot cards bad and another thinking that they were fine. If your Good and Bad sensations happened to precisely correspond to the recommendations of one particular religion, even in the cases where you didn’t actually know what the recommendations were beforehand, then that would be some evidence for the religion being true.
However, in practice there are a lot of religions, and a lot of different Christian sects and interpretations. You’ve said that you’ve chosen certain interpretations instead of others because that’s the interpretation that your sensations favored. Consider now that even if your sensations were just a quirk of your brain and mostly random, there are just so many different Christian sects and varying interpretations that it would be hard not to find some sect or interpretation of Christian doctrine who happened to prescribe the same things as your sensations do.
Then you need to additionally take into account ordinary cognitive flaws like confirmation bias: once you begin to believe in the hypothesis that your sensations reflect Christianity’s teachings, you’re likely to take relatively neutral passages and read into them doctrinal support for your position, and ignore passages which say contrary things.
In fact, if I’ve read you correctly, you’ve explicitly said that you choose the correct interpretation of Biblical passages based on your sensations, and the Biblical passages which are correct are the ones that give you a Good feeling. But you can’t then say that Christianity is true because it’s the Christian bits that give you the good feeling—you’ve defined “Christian doctrine” as “the bits that give a good feeling”, so “the bits that give a good feeling” can’t not be “Christian doctrine”!
Furthermore, our subconscious models are often accurate but badly understood by our conscious minds. For many skills, we’re able to say what’s the right or wrong way of doing something, but be completely unable to verbalize the reason. Likewise, you probably have a better subconscious model of what would be “typical” Christian dogma than you are consciously aware of. It is not implausible that you’d have a subconscious process making guesses on what would be a typical Christian response to something, giving you good or bad sensation based on that, and often guessing right (especially since, as noted before, there’s quite a lot of leeway in how a “Christian response” is defined).
For instance, you say that you hadn’t thought of Tarot cards being Bad before. But the traditional image of Christianity is that of being strongly opposed to witchcraft, and Tarot cards are used for divination, which is strongly related to witchcraft. Even if you hadn’t consciously made that connection, it’s obvious enough that your subconscious very well could have.
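To make the “so many sects and interpretations” point above concrete, here is a toy Python simulation; every number in it is invented. Even if the Good/Bad sensations were pure noise, with enough candidate doctrines to compare against, the best-matching one will look impressively accurate.

import random

random.seed(0)

N_QUESTIONS = 20   # hypothetical yes/no moral questions ("are tarot cards Bad?", ...)
N_SECTS = 1000     # hypothetical number of sects / interpretations to check against

# Random "sensations" and random "doctrines": everything here is noise.
sensations = [random.choice([True, False]) for _ in range(N_QUESTIONS)]

best = 0
for _ in range(N_SECTS):
    doctrine = [random.choice([True, False]) for _ in range(N_QUESTIONS)]
    agreement = sum(s == d for s, d in zip(sensations, doctrine))
    best = max(best, agreement)

print(f"best agreement: {best}/{N_QUESTIONS}")
# Chance agreement averages 10/20, but the best of 1000 random doctrines is
# typically around 16-17 out of 20, purely because many comparisons are available.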
I don’t think the conclusion that the morality described by sensations A/B is a property of the universe at large has been justified. You mention that the sensations predict in advance what Christian doctrine describes as moral or immoral before you know directly what that doctrine says, but that strikes me as being an investigation method that is not useful, for two reasons:
Christian culture permeates most English-speaking cultures very heavily. A person who grows up in such a culture will have a high likelihood of correctly guessing Christianity’s opinion on any given moral question, even if they haven’t personally read the relevant text.
More generally, introspection is a very problematic way of gathering data. Many many biases, both obvious and subtle, come into play, and make your job way more difficult. For example: Did you take notes on each instance of feeling A or B when it occurred, and use those notes (and only those notes) later when validating them against Christian doctrine? If not, you are much more likely to remember hits than misses, or even to after-the-fact readjust misses into hits; human memory is notorious for such things.
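A minimal sketch of that note-taking discipline, in Python (the file name and fields are just placeholders): log each sensation before checking it against doctrine, then score hits and misses from the log alone, so memory can’t quietly turn misses into hits.

import csv
import datetime

LOG = "sensation_log.csv"  # placeholder path

def record(situation: str, sensation: str) -> None:
    """Append one observation ('A' or 'B') *before* looking anything up."""
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([datetime.datetime.now().isoformat(), situation, sensation])

def score(verdicts: dict[str, str]) -> float:
    """verdicts maps situation -> doctrine's verdict ('A' or 'B'), looked up later.
    Returns the fraction of logged entries the later doctrine check agreed with."""
    with open(LOG, newline="") as f:
        rows = list(csv.reader(f))
    judged = [row for row in rows if row[1] in verdicts]
    hits = sum(1 for _, situation, sensation in judged if verdicts[situation] == sensation)
    return hits / len(judged) if judged else 0.0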
In a world entirely without morality, we are constantly facing situations where trusting another person would be mutually beneficial, but trusting when the other person betrays is much worse than mutual betrayal. Decision theory has a name for this type of problem: Prisoner’s Dilemma. The rational strategy is to defect, which makes a pretty terrible world.
But when playing an indefinite number of games, it turns out that cooperating, then punishing defection is a strong strategy in an environment of many distinct strategies. That looks a lot like “turn the other cheek” combined with a little bit of “eye for an eye.” Doesn’t the real world behavior consistent with that strategy vaguely resemble morality?
In short, decision theory suggests that material considerations can justify a substantial amount of “moral” behavior.
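A minimal sketch of that repeated-game point, in Python with the usual illustrative payoffs (nothing here is canonical): tit-for-tat cooperates with cooperators, punishes a defection once, and then forgives.

# Payoffs: (my points, their points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited once, then mutual punishment
print(play(always_defect, always_defect))  # (100, 100)

Mutual cooperators end up far ahead of mutual defectors, and tit-for-tat loses only a little to a pure defector, which is the sense in which “cooperate, then punish defection” vaguely resembles everyday moral behavior.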
Regarding your sensations A and B, from the outside perspective it seems like you’ve been awfully lucky that your sense of right and wrong matches your religious commitments. If you believed Westboro Baptist doctrine but still felt sensations A and B at the same times you feel them now, then you’d be doing sensation-A behavior substantially more frequently. In other words, I could posit that you have a built-in morality oracle, but why should I believe that the oracle should be labelled Christian? If I had the same moral sensations you do, why shouldn’t I call it rationalist morality?
I would say tit-for-tat looks very much like “eye for an eye” but very little like “turn the other cheek”, which seems much more like a cooperatebot.
It’s turn the other cheek in the sense that you immediately forgive as soon as you figure out that your partner is willing to cooperate.
But that’s also true with eye for an eye—one defection merits one defection; it’s not “two eyes for an eye”.
Fair enough. Usually, the sort of people who say “eye for eye” mean something closer to “bag of rice for your entire life”, tho.
Edit: Calibration and all that, you know?
...I became a Christian and determined my religious beliefs based on sensations A and B. Why would I believe in unsupported doctrine that went against what I could determine of the world? I just can’t see myself doing that. My sense of right and wrong match my religious commitments because I chose my religious commitments so they would fit with my sense of right and wrong.
Because my built-in morality oracle likes the Christian Bible.
It’s sufficient to explain some, but not all, morality. Take tarot cards, for example. What was there in the ancestral environment to make those harmful? That just doesn’t make any sense with your theory of morality-as-iterated-Prisoner’s-Dilemma.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. “A implies B” is not equivalent “B implies A”).
And if playing with tarot cards could open a doorway for demons to enter the world (or whatever wrong they cause), it seems perfectly rational to morally condemn tarot cards. I don’t morally condemn tarot cards because I think they have the same mystical powers as regular playing cards (i.e. none). Also, I’m not intending to invoke “ancestral environment” when I invoke decision theory.
But that’s already conditional on a universe that looks different from what most atheists would say exists. If you see proof that tarot cards—or anything else—summon demons, your model of reality takes a hit.
I don’t understand. Can you clarify?
If tarot cards have mystical powers, I absolutely need to adjust my beliefs about the supernatural. But you seemed to assert that decision theory can’t say that tarot cards are immoral in the universes where they are actually dangerous.
Alice has a moral belief that divorce is immoral. This moral belief is supported by objective evidence. She is given a choice to live in Dystopia, where divorce is permissible by law, or Utopia, where divorce is legally impossible. For the most part, Dystopia and Utopia are very similar places to live. Predictably, Alice chooses to live in Utopia. The consistency between Alice’s (objectively true) morality and Utopian law is evidence that Utopia is moral. It is not evidence that Utopia is the cause of Alice’s morality (i.e. it is not evidence that morality is Utopian—the grammatical ordering of the phrases does not help me make my point).
Oh, I’m sorry. Yes, that does make sense. Decision theory WOULD assert it, but to believe they’re immoral requires belief in some amount of supernatural something, right? Hence it makes no sense under what my prior assumptions were (namely, that there was nothing supernatural).
Oh, now I understand. That makes sense.
Accepting the existence of the demon portal should not impact your disbelief in a supernatural morality.
Anyways, the demons don’t even have to be supernatural. First hypothesis would be hallucination, second would be aliens.
I don’t see that decision theory cares why an activity is dangerous. Decision theory seems quite capable of imposing disincentives for poisoning (chemical danger) and cursing (supernatural danger) in proportion to their dangerousness and without regard to why they are dangerous.
The whole reason I’m invoking decision theory is to suggest that supernatural morality is not necessary to explain a substantial amount of human “moral” behavior.
You were not entirely clear, but you seem to be taking these as signals of things being Bad or Good in the morality sense, right? Ok so it feels like there is an objective morality. Let’s come up with hypotheses:
You have a morality that is the thousand shards of desire left over by an alien god. Things that were a good idea (for game-theoretic and similar reasons) in the ancestral environment tend to feel good, so that you would do them; things that feel bad are things you would have wanted to avoid. As we know, an objective morality is what a personal morality feels like from the inside. That is, you are feeling the totally natural feelings of morality that we all feel. As for why you attached special affect to the Bible, I suppose that’s the affect heuristic: you feel like the Bible is true and it is the center of your belief or something, and that goodness gets confused with moral goodness. This is all hindsight, but it seems pretty sound.
Or it could be Jesus-is-Son-of-a-Benevolent-Love-Agent-That-Created-the-Universe. I guess God is sending you signals to say what sort of things he likes/doesn’t like? Is that the proposed mechanism for morality? I don’t know enough about the theory to say much more.
Ok, now let’s consider the prior. The complex loving god hypothesis is incredibly complicated. Minds are so complex we can’t even build one yet. It would take a hell of a lot more than your feeling-of-morality evidence to even raise this to our attention. A lot more than any scientific hypothesis has ever collected, I would say. You must have other evidence, not only to overcome the prior, but also to overcome all the evidence against a loving god who intelligently arranged anything.
Anyways, it sounds like you were primarily a moral nihilist before your encounter with the god-prescribes-a-morality hypothesis. Have you read Eliezer’s metaethics stuff? It deals with the subject of morality in a neutral universe quite well.
I’m afraid I don’t see why you call your reward-signal-from-god an “objective morality”. It sounds like the best course of action would be to learn the mechanism and seize control of it like AIXI would.
I (as a human) already have a strong morality, so if I figured out that the agent responsible for all of the evil in the universe were directly attempting to steer me with a subtle reward signal, I’d be pissed. It’s interesting that you didn’t have that reaction. I guess that’s the moral nihilism thing. You didn’t know you had your own morality.
There are two problems with this argument. First, each individual god might be very improbable, but that could be counterbalanced by the astronomical number of possible gods (e.g. consider all possible tweaks to the holy book), so you can argue a priori against specific flavors of theism but not against theism in general. Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity. A powerful mind (or a program that blossoms into one) could even be simpler than physics as we currently know it, which is already quite complex and seems to have even more complexity waiting in store.
IMO a correct argument against theism should focus on the “loving” part rather than the “mind” part, and focus on evidence rather than complexity priors. The observed moral neutrality of physics is more probable if there’s no moral deity. Given what we know about evolution etc., it’s hard to name any true fact that makes a moral deity more likely.
I’m not sure that everything in my comment is correct. But I guess LW could benefit from developing an updated argument against (or for) theism?
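The evidence-focused version of that argument is just a likelihood comparison, and can be sketched as a toy Bayes update in Python; every number below is invented purely for illustration.

# Toy Bayes update: D = "physics looks morally neutral".
p_deity = 0.5                  # whatever prior you start with
p_D_given_deity = 0.1          # assumed: a moral deity probably leaves visible fingerprints
p_D_given_no_deity = 0.99      # assumed: neutral-looking physics is what no-deity predicts

p_D = p_deity * p_D_given_deity + (1 - p_deity) * p_D_given_no_deity
posterior = p_deity * p_D_given_deity / p_D
print(round(posterior, 3))  # ~0.092: the observation shifts weight away from a moral deity

The shape of the update is the whole point: as long as a morally neutral-looking world is more expected without a moral deity than with one, observing it moves probability away from the deity, however you set the exact numbers.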
Your argument about K-complexity is a decent shorthand, but it causes people to think that this “simplicity” thing is baked into the universe (universal prior), as if we had direct access to the universe (universal prior, reference machine language), rather than its being just another way of saying something is more probable after having updated on a ton of evidence. As you said, it should be about evidence, not priors. No one’s ever seen a prior; at best there’s a brain’s frequentist judgment about which “priors” are good to use when.
That may be somewhat misleading. A seed AI, denied access to external information, will be a moron. Yet the more information it takes into memory the higher the K-complexity of the thing, taken as a whole, is.
You might be able to code a relatively simple AI in your garage, but if it’s going to be useful it can’t stay simple.
ETA: Also if you take the computer system as a whole with all of the programming libraries and hardware arrangements—even ‘hello world’ would have high K-complexity. If you’re talking about whatsoever produces a given output on the screen in terms of a probability mass I’m not sure it’s reasonable to separate the two out and deal with K-complexity as simply a manifestation of high level APIs.
Relevant LW post.
For every program that could be called a mind, there are very very very many that are not.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam’s razor, there’s no reason to postulate fundamental gods.
Given that a god exists by other means (alien singularity), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow naive implementation of “perfect AI” should be about the size of the math required to define a “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now tho.
Now I’m unsure that a fundamental intelligence even means anything. AIXI, for example, is IIRC based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn’t be constrained by Bayes and such. Why not just replace it with a universe that is whatever maximises the morality? max(morality) is simpler than god(morality) almost no matter how simple god is, assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis (because physics is a simpler “morality” than anything else that would produce our world, and if we use physics, god doesn’t have to exist).
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in parent.
What did I miss?
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I take it you don’t think we have a chance of creating a superpowerful AI with our own morality?
We don’t have to be very intelligent to be a threat if we can create something that is.
I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)
Bayesians don’t believe in evidence silly goose, you know that. Anyway, User:cousin_it, you’re essentially right, though I think that LW would benefit less from developing updated arguments and more from reading Aquinas, at least in the counterfactual universe where LW knew how to read. Anyway. In the real world Less Wrong is hopeless. You’re not hopeless. As a decision theorist you’re trying to find God, so you have to believe in him in a sense, right? And if you’re not trying to find God you should probably stay the hell away from FAI projects. Just sayin’.
A really intelligent response, so I upvoted you, even though, as I said, the sense surprised me by telling me that, just as one example, tarot cards are Bad when I had not even considered the possibility, so I doubt this came from inside me.
Well, you are obviously not able to predict the output of your own brain; that’s the whole point of the brain. If morality is in the brain and still too complex to understand, you would expect to encounter moral feelings that you had not anticipated.
Er, I thought it was overall pretty lame, e.g. the whole question-begging w.r.t. the ‘prior probability of omnibenevolent omnipowerful thingy’ thingy (nothing annoys me more than abuses of probability theory these days, especially abuses of algorithmic probability theory). Perhaps you are conceding too much in order to appear reasonable. Jesus wasn’t very polite.
By the way, in case you’re not overly familiar with the heuristics and biases literature, let me give you a hint: it sucks. At least the results that most folk around here cite have basically nothing to do with rationality. There’s some quite good stuff with tons of citations, e.g. Gigerenzer’s, but Eliezer barely mentioned it to Less Wrong (as fastandfrugal.com, which he endorsed) and therefore, as expected, Less Wrong doesn’t know about it. (Same with interpretations of quantum mechanics, as Mitchell Porter often points out. I really hope that Eliezer is pulling some elaborate prank on humanity. Maybe he’s doing it unwittingly.)
Anyway the upshot is that when people tell you about ‘confirmation bias’ as if it existed in the sense they think it does then they probably don’t know what the hell they’re talking about and you should ignore them. At the very least don’t believe them until you’ve investigated the literature yourself. I did so and was shocked at how downright anti-informative the field is, and less shocked but still shocked at how incredibly useless statistics is (both Bayesianism as a theoretical normative measure and frequentism as a practical toolset for knowledge acquisition). The opposite happened with the parapsychology literature, i.e. low prior, high posterior. Let’s just say that it clearly did not confirm my preconceptions; lolol.
Lastly, towards the esoteric end: All roads lead to Rome, if you’ll pardon a Catholicism. If they don’t it’s not because the world is mad qua mad; it is because it is, alas, sinful. An easy way to get to hell is to fall into a fully-general-counterargument blackhole, or a literal blackhole maybe. Those things freak me out.
(P.S. My totally obnoxious arrogance is mostly just a passive aggressive way of trolling LW. I’m not actually a total douchebag IRL. /recursive-compulsive-self-justification)
Explain?
Explain?
Elaborate?
I love how Less Wrong basically thinks that all evidence that doesn’t support its favored conclusion is bad because it just leads to confirmation bias. “The evidence is on your side, granted, but I have a fully general counterargument called ‘confirmation bias’ that explains why it’s not actually evidence!” Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn’t actually exist. (Eliezer knew about the controversy, which is why his post is titled “Positive Bias”, which arguably also doesn’t exist, especially not in a cognitively relevant way.) Then they talk about Occam’s razor while completely failing to understand what algorithmic probability is actually saying. Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable! It’s like they’re trolling and I’m not supposed to feed them but they look sort of like a very hungry, incredibly stupid puppy.
Explain?
http://library.mpib-berlin.mpg.de/ft/gg/gg_how_1991.pdf is exemplary of the stuff I’m thinking of. Note that that paper has about 560 citations. If you want to learn more then dig into the literature. I really like Gigerenzer’s papers as they’re well-cited and well-reasoned, and he’s a statistician. He even has a few papers about how to improve rationality, e.g. http://library.mpib-berlin.mpg.de/ft/gg/GG_How_1995.pdf has over 1,000 citations.
Searching and skimming, the first link does not seem to actually say that confirmation bias does not exist. It says that it does not appear to be the cause of “overconfidence bias”—it seems to take no position on whether it exists otherwise.
Okay, yeah, I was taking a guess. There are other papers that talk about confirmation/positive bias specifically, a lot of it in the vein of this kinda stuff. Maybe Kaj’s posts called ‘Heuristics and Biases Biases?’ from here on LW reference some relevant papers too. Sorry, I have limited cognitive resources at the moment; I’m mostly trying to point in the general direction of the relevant literature because there’s quite a lot of it.
Hard to know whether to agree or disagree without knowing “more probable than what?”
Sorry. More probable than supernaturalistic universes of the sort that the majority of humans finds more likely (where e.g. psi phenomena exist).
So I think you’re quite right in that “supernatural” and “natural” are sets that contain possible universes of very different complexity and that those two adjectives are not obviously relevant to the complexity of the universes they describe. I support tabooing those terms. But if you compare two universes, one of which is described most simply by the wave function and an initial state, and another which is described by the wave function, an initial state and another section of code describing the psychic powers of certain agents, the latter universe is a priori more unlikely (bracketing for the moment the simulation issue). Obviously, if psi phenomena can be incorporated into the physical model without adding additional lines of code, that’s another matter entirely.
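To put toy numbers on that comparison (the lengths are invented, and this ignores everything subtle about real universal priors): under a length-based prior, each extra bit of required code halves the weight, so a separate psi module gets penalised by a factor exponential in its size.

def prior_odds(shorter_bits: int, longer_bits: int) -> float:
    """Toy length-prior odds favouring the shorter description."""
    return 2.0 ** (longer_bits - shorter_bits)

plain_physics = 10_000                   # assumed bits: wave function + initial state
physics_plus_psi = plain_physics + 500   # same, plus a separate psi-powers module

print(prior_odds(plain_physics, physics_plus_psi))  # 2**500, roughly 3e150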
Returning to the simulation issue, I take your position to be that there are conceivable “meta-physics” (meant literally; not necessarily referring to the branch of philosophy) which can make local complexities more common? Is that a fair restatement? I have a suspicion that this is not possible without paying the complexity back at the other end, though I’m not sure.
Boltzmann brain, maybe?
Explain?
What was said that’s a synonym for or otherwise invoked the confirmation bias?
It’s mentioned a few times in this thread re AspiringKnitter’s evidence for Christianity. I’m too lazy to link to them, especially as it’d be so easy to get the answer to your question with control+f “confirmation”. It’s so easy, in fact, that I’m not sure I interpreted your question correctly.
Just to echo the others that brought this up, I applaud your courage; few people have the guts to jump into the lions’ den, as it were. That said, I’m going to play the part of the lion (*) on this topic.
How do you know that these sensations come from a supernatural entity, and not from your own brain? I know that if I started experiencing odd physical sensations, no matter how pleasant, this would be my first hypothesis (especially since, in my personal case, the risk of stroke is higher than average). In fact, if I experienced anything that radically contradicted my understanding of the world, I’d probably consider the following explanations, in order of decreasing likelihood:
I am experiencing some well-known cognitive bias.
My brain is functioning abnormally and thus I am experiencing hallucinations.
Someone is playing a prank on me.
Shadowy human agencies are testing a new chemical/biological/emissive device on me.
A powerful (yet entirely material) alien is inducing these sensations, for some reason.
A trickster spirit (such as a Kami, or the Coyote, etc.) is doing the same by supernatural means.
A localized god is to blame (Athena, Kali, the Earth Mother, etc.)
An omniscient, omnipotent, and generally all-everything entity is responsible.
This list is not exhaustive, obviously, it’s just some stuff I came up with off the top of my head. Each next bullet point is less probable than the one before it, and thus I’d have to reject pretty much every other explanation before arriving at “the Christian God exists”.
(*) Or a bobcat, at least.
Is either of those well-known? What about the pattern with which they’re felt? Sound like anything you know? Me neither.
That don’t have any other effect? That remain stable for years? With no other sign of mental illness? Besides, if I set out by assuming that I can’t tell anything because I’m crazy anyway, what good does that do me? It doesn’t tell me what to predict. It doesn’t tell me what to do. All it tells me is “expect nothing and believe nothing”. If I assume it’s just these hallucinations and everything else is normal, then I run into “my brain is functioning abnormally and I am experiencing hallucinations that tell me Christian doctrine is true even when I don’t know the doctrine in question”, which is the original problem you’re trying to explain.
And instead of messing with me like a real trickster, it convinces me to worship something other than it and in so doing increases my quality of life?
You’ve read xkcd?
In addition to dlthomas’s suggestion of the affect heuristic, I’d suggest something like the ideomotor effect amplified by confirmation bias.
However, there’s a reason I put “cognitive bias” as the first item on my list: I believe that it is overwhelmingly more likely than any alternatives. Thus, it would take a significant amount of evidence to convince me that I’m not laboring under such a bias, even if the bias does not yet have a catchy name.
AFAIK some brain cancers can present this way. In any case, if I started experiencing unusual physical symptoms all of a sudden, I’d consult a medical professional. Then I’d write down the results of his tests, and consult a different medical professional, just in case. Better safe than sorry.
Trickster spirits (especially Tanuki or Kitsune) rarely demand worship; messing with people is enough for them. Some such spirits are more or less benign; the Tanuki and Raven both would probably be on board with the idea of tricking a human into improving his or her life.
That said, you skipped over human agents and aliens, both of which are IMO overwhelmingly more likely to exist than spirits (though that doesn’t make them likely to exist in absolute terms).
Hadn’t everyone ? :-)
It sounds a little like the affect heuristic.
AspiringKnitter, what do you think about people who have sensory experiences that indicate that some other religion or text is correct?
Do they actually exist?
Well, as best I can tell my maintainer didn’t install the religion patch, so all I’m working with is the testaments of others; but I have seen quite a variety of such testaments. Buddhism and Hinduism have a typology of religious experience much more complex than anything I’ve seen systematically laid down in mainline Christianity; it’s usually expressed in terms unique to the Dharmic religions, but vipassanā for example certainly seems to qualify as an experiential pointer to Buddhist ontology.
If you’d prefer Western traditions, a phrase I’ve heard kicked around in the neopagan, reconstructionist, and ceremonial magic communities is “unsubstantiated personal gnosis”. While that’s a rather flippant way of putting it, it also seems to point to something similar to your experiences.
Huh, interesting. I should study that in more depth, then.
Careful, you may end up like Draco in HPMoR chapter 23, without a way to gom jabbar the guilty parties (sorry about the formatting):
Nah, false beliefs are worthless. That which is true is already so; owning up to it doesn’t make it worse. If I turned out to actually be wrong—well, I have experience being wrong about religion. I’d probably react just like I did before.
Feel free to elaborate or link if you have talked about it before.
I used to be an atheist before realizing that was incorrect. I wasn’t upset about that; I had been wrong, I stopped being wrong. Is that enough?
Intriguing. I wonder what made you see the light.
Sure. Pick a religion.
God does not solve this problem.
It sounded like she was already coming down on the side of the good being good because it is commanded by God when she said, “an innate morality that seems, possibly, somewhat arbitrary.”
So maybe the dilemma is not such a problem for her.
I can understand your hesitation about telling that story. Thanks for sharing it.
Some questions, if you feel like answering them:
Can you give me some examples of things you hadn’t known Christian doctrine considered Bad before you sensed them as A?
If you were advising someone who lacks the ability to sense Good and Bad directly on how to have accurate beliefs about what’s Good and Bad, what advice would you give? (It seems to follow from what you’ve said elsewhere that simply telling them to believe Christianity isn’t sufficient, since lots of people sincerely believe they are following the directive to “believe Christianity” and yet end up believing Bad things. It seems something similar applies to “believe the New Testament”. Or does it?)
If you woke up tomorrow and you experienced sensation A in situations that were consistent with Christianity being true, and experienced sensation B in situations that were consistent with Islam being true, what would you conclude about the world based on those experiences?
EDIT: My original comment got A and B reversed. Fixed.
Upvoted for courage.
.
I think that should probably be AspiringKnitter’s call. (I don’t think you’re pushing too hard, given the general norms of this community, but I’m not sure of what our norms concerning religious discussions are.)
If you want it to be my call, then I say go ahead.
Do you currently get a “Bad” signal on other holy books?
Do you get it when you don’t know it’s another holy book?
Let’s try that! I got a Bad signal on the Koran and a website explaining the precepts of Wicca, but I knew what both of those were. I would be up for trying a test where you give me quotes from the Christian Bible (warning: I might recognize them; if so, I’ll tell you. For what it’s worth, I’ve only read part of Ezekiel, but I might recognize the story anyway… I’ve read a lot of the Bible, actually), from other holy books, and from neutral sources like novels (though I might have read those, too; I’ll tell you if I recognize them), without telling me where they’re from. If it’s too difficult to find Biblical quotes, other Christian writings might serve, as could similar writings from other religions. I should declare up front that I know next to nothing about Hinduism but once got a weak Good reading from what someone said about it. Also, I would prefer longer quotes; the feelings build up from unnoticeable rather than hitting full force instantly. If they could be at least as long as a chapter of the Bible, that would be good.
That is, if you’re actually proposing that we test this. If you didn’t really want to, sorry. It just seems cool.
Upvoted for the willingness to test, and in general for being a good sport.
Try this one:
The preparatory prayer is made according to custom.
The first prelude will be a certain historical consideration of ___ on the one part, and __ on the other, each of whom is calling all men to him, to be gathered together under his standard.
The second is, for the construction of the place, that there be represented to us a most extensive plain around Jerusalem, in which ___ stands as the Chief-General of all good people. Again, another plain in the country of Babylon, where ___ presents himself as the captain of the wicked and [God’s] enemies.
The third, for asking grace, will be this, that we ask to explore and see through the deceits of the evil captain, invoking at the same time the Divine help in order to avoid them; and to know, and by grace be able to imitate, the sincere ways of the true and most excellent General, ___.
The first point is, to imagine before my eyes, in the Babylonian plain, the captain of the wicked, sitting in a chair of fire and smoke, horrible in figure, and terrible in countenance.
The second, to consider how, having assembled a countless number of demons, he disperses them through the whole world in order to do mischief; no cities or places, no kinds of persons, being left free.
The third, to consider what kind of address he makes to his servants, whom he stirs up to seize, and secure in snares and chains, and so draw men (as commonly happens) to the desire of riches, whence afterwards they may the more easily be forced down into the ambition of worldly honour, and thence into the abyss of pride.
Thus, then, there are three chief degrees of temptation, founded in riches, honours, and pride; from which three to all other kinds of vices the downward course is headlong.
If I had more of the quote, it would be easier. I get a weak Bad feeling. The textual cues suggest it probably comes from either the Talmud or the Koran, and I think it is one of those, but I’m not getting a strong feeling on this quote, so I worry that my guess about where it comes from could be confusing me.
But I’m going to stick my neck out anyway; I feel like it’s Bad.
That is what I had expected. St. Ignatius is a Catholic frequently derided by non-Catholic fundamentalist Christians.
I think it’s here
I admit to being surprised that this is a Christian writing.
What do you think of this; it’s a little less obscure:
Your wickedness makes you as it were heavy as lead, and to tend downwards with great weight and pressure towards hell; and if [God] should let you go, you would immediately sink and swiftly descend and plunge into the bottomless gulf, and your healthy constitution, and your own care and prudence, and best contrivance, and all your righteousness, would have no more influence to uphold you and keep you out of hell, than a spider’s web would have to stop a falling rock. Were it not that so is the sovereign pleasure of [God], the earth would not bear you one moment; for you are a burden to it; the creation groans with you; the creature is made subject to the bondage of your corruption, not willingly; the sun don’t willingly shine upon you to give you light to serve sin and [the evil one]; the earth don’t willingly yield her increase to satisfy your lusts; nor is it willingly a stage for your wickedness to be acted upon; the air don’t willingly serve you for breath to maintain the flame of life in your vitals, while you spend your life in the service of [God]’s enemies. [God]’s creatures are good, and were made for men to serve [God] with, and don’t willingly subserve to any other purpose, and groan when they are abused to purposes so directly contrary to their nature and end. And the world would spew you out, were it not for the sovereign hand of him who hath subjected it in hope. There are the black clouds of [God]’s wrath now hanging directly over your heads, full of the dreadful storm, and big with thunder; and were it not for the restraining hand of [God] it would immediately burst forth upon you. The sovereign pleasure of [God] for the present stays his rough wind; otherwise it would come with fury, and your destruction would come like a whirlwind, and you would be like the chaff of the summer threshing floor.
Bad? I think Bad, but wish I had more of the quote.
That moderately surprises me. It’s from “Sinners in the Hands of an Angry God” by Jonathan Edwards.
I recognized it by the first sentence, but then I have read it several times. (For those of you that haven’t heard of it, it is probably the most famous American sermon, delivered in 1741.)
I think it’s this.
Huh! How about this:
… the mysterious (tablet) … is surrounded by an innumerable company of angels; these angels are of all kinds, — some brilliant and flashing ___, down to ___.
And now there comes an Angel, to hide the tablet with his mighty wing. This Angel has all the colours mingled in his dress; his head is proud and beautiful; his headdress is of silver and red and blue and gold and black, like cascades of water, and in his left hand he has a pan-pipe of the seven holy metals, upon which he plays. I cannot tell you how wonderful the music is, but it is so wonderful that one only lives in one’s ears; one cannot see anything any more.
Now he stops playing and moves with his finger in the air. His finger leaves a trail of fire of every colour, so that the whole Aire is become like a web of mingled lights. But through it all drops dew.
(I can’t describe these things at all. Dew doesn’t represent what I mean in the least. For instance, these drops of dew are enormous globes, shining like the full moon, only perfectly transparent, as well as perfectly luminous.) … All this while the dewdrops have turned into cascades of gold finer than the eyelashes of a little child. And though the extent of the Aethyr is so enormous, one perceives each hair separately, as well as the whole thing at once. And now there is a mighty concourse of angels rushing toward me from every side, and they melt upon the surface of the egg in which I am standing __, so that the surface of the egg is all one dazzling blaze of liquid light.
Now I move up against the tablet, — I cannot tell you with what rapture. And all the names of __, that are not known even to the angels, clothe me about. All the seven senses are transmuted into one sense, and that sense is dissolved in itself …
Neutral/no idea.
This is it
Huh. Odd.
Yes, I was trying to figure out how much of the feeling had to do with lack of Hell (answer: not all of it). The Tarot does fit the pattern.
? I’m confused.
Good for you. ^_^
You had a Bad feeling about two Christian quotes that mentioned Hell or demons/hellfire. You also got a Good feeling about a quote from Nietzsche that didn’t mention Hell. I don’t know the context of your reactions to the Tarot and Wicca, but obviously people have linked those both to Hell. (See also Horned God, “Devil” trump.) So I wanted to get your reaction to a passage with no mention of Hell from an indeterminate religion, in case that sufficed to make it seem Good.
The author designed a famous Tarot deck, and inspired a big chunk (at minimum) of Wicca.
I hadn’t considered that hypothesis. I’d upvote for the novel theory, but now that you’ve told me, you’ll never be able to trust further reactions that could confirm or deny it, which seems like it’s worth a downvote, so I’m not voting your post up or down. That said, I think this fails to explain having a Bad reaction to this page and the entire site it’s on, despite thinking before reading it that Wicca was foofy nonsense and completely not expecting to find evil of that magnitude (a really, really strong feeling—none of the quotes you guys have asked me about have been even a quarter that bad). It wasn’t slow, either; unlike most other things, it was almost immediately obvious. (The fact that this has applied to everything else I’ve ever read about Wicca since—at least, everything written by Wiccans about their own religion—could have to do with expectation, so I can see where you wouldn’t regard subsequent reactions as evidence… but the first one, at least, caught me totally off-guard.)
I know who Crowley is. (It was his tarot deck that someone gave me as a gift—and I was almost happy about it, because I’d actually been intending to research tarot because it seemed cool and I meant to use the information for a story I was writing. But then I felt like, you know, Bad, so I didn’t end up using it.) That’s why I was surprised not to have a bad feeling about his writings.
One more, then I’ll stop.
Man is a rope tied between beast and [superior man] - a rope over an abyss. A dangerous across, a dangerous on-the-way, a dangerous looking-back, a dangerous shuddering and stopping.
What is great in man is that he is a bridge and not a goal: what is lovable in man is that he is an overture and a going under.
I love those that know not how to live except by going under, for they are those who cross over.
I love the great despisers, because they are the great reverers, and arrows of longing for the other shore.
I love those who do not first seek a reason beyond the stars for going under and being sacrifices, but sacrifice themselves to the earth, that the earth may some day become the [superior man’s].
I love him who lives to know, and wants to know so that the [superior man] may live some day. Thus he wants to go under.
I love him who works and invents to build a house for the [superior man] and to prepare earth, animal, and plant for him: for thus he wants to go under.
I love him who loves his virtue: for virtue is the will to go under, and an arrow of longing.
I love him who does not hold back one drop of spirit for himself, but wants to be entirely the spirit of his virtue: thus he strides over the bridge as spirit.
I love him who makes his virtue his addiction and catastrophe: for his virtue’s sake he wants to live on and to live no longer.
I love him who does not want to have too many virtues. One virtue is more virtue than two, because it is more of a noose on which his catastrophe may hang.
I love him whose soul squanders itself, who wants no thanks and returns none: for he always gives away, and does not want to preserve himself.
I love him who is abashed when the dice fall to make his fortune, and who asks: “Am I a crooked gambler?” For he wants to perish.
I love him who casts golden words before his deed, and always does more than he promises: for he wants to go under.
I love him who justifies future and redeems past generations: for he wants to perish of the present.
I love him who chastens his God, because he loves his God: for he must perish of the wrath of his God.
I love him whose soul is deep even in being wounded, and who can perish of a small experience: thus he gladly goes over the bridge.
I love him whose soul is so overfull that he forgets himself, and all things are in him: thus all things spell his going under.
I love him who has a free spirit and a free heart: thus his head is only the entrails of his heart, but his heart causes him to go under.
I love all who are as heavy drops, falling one by one out of the dark cloud that hangs over men: they herald the advent of lightning, and, as heralds, they perish.
Behold, I am a herald of the lightning, and a heavy drop from the cloud: but this lightning is called [superior man].
I know very little about Nietzsche, but I recognized this instantly because the first three lines were quoted in Sid Meier’s Alpha Centauri. :-)
I get a moderate Good reading (?!) and I’m confused to get it because the morality the person is espousing seems wrong. I’m guessing this comes from someone’s writings about their religion, possibly an Eastern religion?
Walter Kaufmann (Nietzsche’s translator here) prefers “overman” as the best translation of Übermensch.
ETA: This is some interesting commentary on the work
I’m surprised. I’d heard Nietzsche was not a nice person, but had also heard good things about him… huh. I’ll have to read his work, now. I wonder if the library has some.
Nietzsche’s sister was an anti-Semite and a German nationalist. After Nietzsche’s death, she edited his works into something that became an intellectual foundation for Nazism. Thus, he got a terrible reputation in the English-speaking world.
It’s tolerably clear from a reading of his unabridged works that Nietzsche would have hated Nazism. But he would not have identified himself as Christian (at least as measured by a typical American today). He went mad before he died, and the apocryphal tale is that the last thing he did before being institutionalized was to see a horse being beaten in the street and move to protect it.
To see his moral thought, you could read Thus Spake Zarathustra. To see why he isn’t exactly Christian, you can look at The Genealogy of Morals. Actually, you might also like Kierkegaard because he expresses somewhat similar thoughts, but within a Christian framework.
To really see why he isn’t Christian, read The Antichrist.
As with what he wrote in Genealogy of Morals, it is unclear how tongue-in-cheek or intentionally provocative Nietzsche is being. I’m honestly not sure whether Nietzsche thought the “master morality” was better or worse than the “slave morality.”
The sense I get—but note that it’s been a couple of years since I’ve read any substantial amount of Nietzsche—is that he treats master morality as more honest, and perhaps what we could call psychologically healthier, than slave morality, but does not advocate that the former be adopted over the latter by people living now; the transition between the two is usually explained in terms of historical changes. The morality embodied by his superior man is neither, or a synthesis of the two, and while he says a good deal about what it’s not I don’t have a clear picture of many positive traits attached to it.
That’s because the superman, by definition, invents his own morality. If you read a book telling you the positive content of morality and implement it because the eminent philosopher says so, you ain’t superman.
I wouldn’t call him a fully sane person, especially in his later work (he suffered in later life from mental problems most often attributed to neurosyphilis, and it shows), but he has a much worse reputation than I think he really deserves. I’d recommend Genealogy of Morals and The Gay Science; they’re both laid out a bit more clearly than the works he’s most famous for, which tend to be heavily aphoristic and a little scattershot.
It’s easy to find an equally forceful bit by Nietzsche that’s not been quoted to death, really. Had AK recognized it, you would’ve botched a perfectly good test.
It’s been a long time since I read that… I guess Nietzsche wouldn’t have found “moderation in all things” too appealing...
Cute.
Because I’m curious
Fairly read as a whole and in the context of the trial, the instructions required the jury to find that Chiarella obtained his trading advantage by misappropriating the property of his employer’s customers. The jury was charged that,
Record 677 (emphasis added). The language parallels that in the indictment, and the jury had that indictment during its deliberations; it charged that Chiarella had traded “without disclosing the material non-public information he had obtained in connection with his employment.” It is underscored by the clarity which the prosecutor exhibited in his opening statement to the jury. No juror could possibly have failed to understand what the case was about after the prosecutor said:
“In sum, what the indictment charges is that Chiarella misused material nonpublic information for personal gain and that he took unfair advantage of his position of trust with the full knowledge that it was wrong to do so. That is what the case is about. It is that simple.”
Id. at 46. Moreover, experienced defense counsel took no exception and uttered no complaint that the instructions were inadequate in this regard. [Therefore, the conviction is due to be affirmed].
I get no reading here. My guess is that this is some sort of legal document, in which case I’m not really surprised to get no reading. Is that correct?
Yes, it is a legal document. Specifically, a dissent from the reversal of a criminal conviction. In particular, I think the quoted text is an incredibly immoral and wrong-headed understanding of American criminal law. Which makes it particularly depressing that the writer was Chief Justice when he wrote it.
With, I assume, the names changed? Otherwise it seems too easy :-P
Yes, where names need to be changed. [God] will be sufficient to confuse me as to whether it’s “the LORD” or “Allah” in the original source material. There might be a problem with substance in very different holy books where I might be able to guess the religion just by what they’re saying (like if they talk about reincarnation or castes, I’ll know they’re Hindu or Buddhist). I hope anyone finding quotes will avoid those, of course.
This is a bit off-topic, but, out of curiosity, is there anything in particular that you find objectionable about Wicca on a purely analytical level? I’m not saying that you must have such a reason, I’m just curious.
Just in the interests of pure disclosure, the reason I ask is because I found Wicca to be the least harmful religion among all the religions I’d personally encountered. I realize that, coming from an atheist, this doesn’t mean much, of course...
Assuming you mean besides the fact that it’s wrong (by both meanings—incorrect and sinful), then no, nothing at all.
I’m actually not entirely sure what you mean by “incorrect”, and how it differs from “sinful”. As an atheist, I would say that Wicca is “incorrect” in the same way that every other religion is incorrect, but presumably you’d disagree, since you’re religious.
Some Christians would say that Wicca is both “incorrect” and “sinful” because its followers pray to the wrong gods, since a) YHVH/Jesus is the only God who exists, thus worshiping other (nonexistent) gods is incorrect, and b) he had expressly commanded his followers to worship him alone, and disobeying God is sinful. In this case, though, the “sinful” part seems a bit redundant (since Wiccans would presumably worship Jesus if they were convinced that he existed and their own gods did not). But perhaps you meant something else?
I mean incorrect in that they believe things that are wrong, yes; they believe in, for instance, a goddess who doesn’t really exist. And sinful because witchcraft is forbidden.
Wouldn’t this imply that witchcraft is effective, though? Otherwise it wouldn’t be forbidden; after all, God never said (AFAIK), “you shouldn’t pretend to cast spells even though they don’t really work”, nor did he forbid a bunch of other stuff that is merely silly and a waste of time. But if witchcraft is effective, it would imply that it’s more or less “correct”, which is why I was originally confused about what you meant.
FWIW, I feel compelled to point out that some Wiccans believe in multiple gods or none at all, even though this is off-topic—since I can practically hear my Wiccan acquaintances yelling at me in the back of my head… metaphorically speaking, that is.
Yes.
Which is still wrong.
Ok, but in that case, isn’t witchcraft at least partially “correct”? Otherwise, how can they cast all those spells and make them actually work (assuming, that is, that their spells actually do work)?
By consorting with demons.
Ah, right, so you believe that the entities that Wiccans worship do in some way exist, but that they are actually demons, not benign gods.
I should probably point out at this point that Wiccans (well, at least those whom I’d met), consider this point of view utterly misguided and incredibly offensive. No one likes to be called a “demon-worshiper”, especially when one is generally a nice person whose main tenet in life is a version of “do no harm”. You probably meant no disrespect, but flat-out calling a whole group of people “demon-worshipers” tends to inflame passions rather quickly, and not in a good way.
That’s a bizarre thing to say. Is their offense evidence that I’m wrong? I don’t think so; I’d expect it whether or not they worship demons. Or should I believe something falsely because the truth is offensive? That would go against my values—and, dare I say it, the suggestion is offensive. ;) Or do you want me to lie so I’ll sound less offensive? That risks harm to me (it’s forbidden by the New Testament) and to them (if no one ever tells them the truth, they can’t learn), as well as not being any fun.
What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away.
Nice people like that deserve truth, not lies, especially when eternity is at stake.
So does calling people Cthulhu-worshipers. But when you read that article, you agreed that it was apt, right? Because you think it’s true. You guys sure seem quick to tell me that my beliefs are offensive, but if I said the same to you, you’d understand why that’s beside the point. If Wiccans worship demons, I desire to believe that Wiccans worship demons; if Wiccans don’t worship demons, I desire to believe that Wiccans don’t worship demons. Sure, it’s offensive and un-PC. If you want me to stop believing it, tell me why you think it’s wrong.
I like your post (and totally agree with the first paragraph), but have some concerns that are a little different from Bugmaster’s.
What’s the exact difference between a god and a demon? Suppose Wicca is run by a supernatural being (let’s call her Astarte) who asks her followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of spells, and insists she will reward the righteous and punish the wicked. You worship a different supernatural being who also asks His followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of prayer, and insists He will reward the righteous and punish the wicked. If both Jehovah and Astarte exist and act similarly, why name one “a god” and the other “a demon”? Really, the only asymmetry seems to be that Jehovah tries to inflict eternal torture on people who prefer Astarte, whereas Astarte has made no such threats against people who prefer Jehovah, which is honestly advantage Astarte. So why not just say “Of all the supernatural beings out there, some people prefer this one and other people prefer that one”?
I mean, one obvious answer is certainly to list the ways Jehovah is superior to Astarte—the one created the Universe, the other merely lives in it; the one is all-powerful, the other merely has some magic; the one is wise and compassionate, the other evil and twisted. But all of these are Jehovah’s assertions. One imagines Astarte makes different assertions to her followers. The question is whose claims to believe.
Jehovah has a record of making claims which seem to contradict the evidence from other sources—the seven-day creation story, for example. And He has a history of doing things which, when assessed independently of their divine origin, we would consider immoral—the Massacre of the Firstborn in Exodus, or sanctioning the rape, enslavement, infanticide, and genocide of the Canaanites. So it doesn’t seem obvious at all that we should trust His word over Astarte’s, especially since you seem to think that Astarte’s main testable claim—that she does magic for her followers—is true.
Now, you’ve already said that you believe in Christianity because of direct personal revelation—a sense of serenity and rightness when you hear its doctrines, and a sense of repulsion from competing religions, and that this worked even when you didn’t know what religion you were encountering and so could not bias the result. I upvoted you when you first posted this because I agree that such feelings could provide some support for religious belief. But that was before you said you believed in competing supernatural beings. Surely you realize how difficult a situation that puts you in?
Giving someone a weak feeling of serenity or repulsion is, as miracles go, not a very flashy one. One imagines it would take only simple magic, and should be well within the repertoire of even a minor demon or spirit. And you agree that Astarte performs minor miracles of the same caliber all the time to try to convince her own worshippers. So all that your feelings indicate is that some supernatural being is trying to push you toward Christianity. If you already believe that there are multiple factions of supernatural beings, some of whom push true religions and others of whom push false ones, then noticing that some supernatural being is trying to push you toward Christianity provides zero extra evidence that Christianity is true.
Why should you trust the supernatural beings who have taken an interest in your case, as opposed to the supernatural beings apparently from a different faction who caused the seemingly miraculous revelations in this person and this person’s lives?
Since you use the names Jehovah and Astarte, I’ll follow suit, though they’re not the names I prefer.
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte. Also, if Astarte knows this, but pretends otherwise, then Astarte’s a liar.
Not quite. I only believe in “multiple factions of supernatural beings” (actually only two) because it’s implied by Christianity being true. It’s not a prior belief. If Christianity is false, one or two or fifteen or zero omnipotent or slightly-powerful or once-human or monstrous gods could exist, but if Christianity is false I’d default to atheism, since if my evidence for Christianity proved false (say, I hallucinated it all because of some undiagnosed mental illness that doesn’t resemble any currently-known mental illness and only causes that one symptom) without my gaining additional evidence for some other religion or non-atheist cosmology, I’d have no evidence for anything spiritual. Or do I misunderstand? I’m confused.
Being, singular, first of all.
I already know myself, what kind of a person I am. I know how rational I am. I know how non-crazy I am. I know exactly the extent to which I’ve considered illness affecting my thoughts as a possible explanation.
I know I’m not lying.
The first person became an apostate, something I’ve never done, and is still confused years later. The second person records only the initial conversion, while I know how it’s played out in my own life for several years.
The second person is irrationally turned off by even the mere appearance of Catholicism and Christianity in general because of terrible experiences with Catholics.
I discount all miracle stories from people I don’t know, including Christian and Jewish miracle stories, which could at least plausibly be true. I discount them ALL when I don’t know the person. In fact, that means MOST of the stories I hear and consider unlikely (without passing judgment when I have so little info) are stories that, if true, essentially imply Christianity, while others would provide evidence for it.
And knowing how my life has gone, I know how I’ve changed as a person since accepting Jesus, or Jehovah if that’s the word you prefer. They don’t mention drastic changes to their whole personalities to the point of near-unrecognizability even to themselves. In brief: I was unbelievably awful. I was cruel, hateful, spiteful, vengeful and not a nice person. I was actively hurtful toward everyone, including immediate family. After finding Jesus, I slowly became a less horrible person, until I got to where I am now. Self-evaluation may be somewhat unreliable, but I think the lack of any physical violence recently is a good sign. Also, rather than escalating arguments as far as possible, when I realize I’ve lashed out, I deliberately make an effort not to fall prey to consistency bias and defend my actions, but to stop and apologize and calm down. That’s something I would not have done—would not have WANTED to do, would not have thought was a good idea, before.
I don’t know (I only guess) what Astarte does to xyr worshipers. I’m conjecturing; I’ve never prayed to xem, nor have I ever been a Wiccan or a follower of any other non-Christian religion. But I think I ADBOC this statement; if said by me, it would have sounded more like “Satan makes xyrself look very appealing”.
(I’m used to a masculine form for this being. You’re using a feminine form. Rather than argue, I’ve simply shifted my pronoun usage to an accurate—possibly more accurate—and less loaded set of pronouns.)
Also, my experience suggests that if something is good or evil, and you’re open to the knowledge, you’ll see through any lies or illusions with time. It might be a lot of time—I’ll confess I recently got suckered into something for, I think, a couple of years, when I really ought to have known better much sooner, and no, I don’t want to talk about it—but to miss it forever requires deluding yourself.
(Not, as we all know, that self-delusion is particularly rare...)
That someone is trying to convince me to be a Christian or that I perceive the nature of things using an extra sense.
Strength varies. Around the time I got to the fourth Surah of the Koran, it was much flashier than anything I’ve seen since, including everything previously described (on the negative side) at incredible strength plus an olfactory hallucination. And the result of, I think, two days straight of Bible study and prayer at all times constantly… well, that was more than a weak feeling of serenity. But on its own it’d be pretty weak evidence, because I was only devoting so much time to prayer because my state of mind was so volatile and my thoughts and feelings were unreliable. It’s only repetitions of that effect that let me conclude that it means what I’ve already listed, after controlling for other possibilities that are personal so I don’t want to talk about it. Those are rare extremes, though; normally it’s not as flashy as those.
I consider it way likelier than you do, anyway. I’m only around fiftyish percent confidence here. But that’s only one aspect of it. Their religion also claims to cause changes in its followers along the lines of “more in tune with the Divine” or something, right? So if there are any overlapping claims about morality, that would also be testable—NOT absolute morality of the followers, but change in morality on mutually-believed-in traits, measuring before and after conversion, then a year on, then a few years on, then several years on. Of course, I’m not sure how you’ll ever get the truth about how moral people are when they think no one’s watching...
Sorry—I used “Astarte” and the female pronoun because the Wiccans claim to worship a Goddess, and Astarte was the first female demon I could think of. If we’re going to go gender-neutral, I recommend “eir”, just because I think it’s the most common gender neutral pronoun on this site and there are advantages to standardizing this sort of thing.
Well, okay, but this seems to be an argument from force, sort of “Jehovah is a god and Astarte a demon because if I say anything else, Jehovah will torture me”. It seems to have the same form as “Stalin is not a tyrant, because if I call Stalin a tyrant, he will kill me, and I don’t want that!”
It sounds like you’re saying the causal history of your belief should affect the probability of it being true.
Suppose before you had any mystical experience, you had non-zero probabilities X of atheism, Y of Christianity (in which God promotes Christianity and demons promote non-Christian religions like Wicca), and Z of any non-Christian religion (in which God promotes that religion and demons promote Christianity).
Then you experience an event which you interpret as evidence for a supernatural being promoting Christianity. This should raise the probabilities of Y and Z by the same factor, leaving the ratio between them unchanged, since both theories seem to predict this equally well.
You could still end up a Christian if you started off with a higher probability Y than Z, but it sounds like you weren’t especially interested in Christianity before your mystical experience, and the prior for Z is higher than Y since there are so many more non-Christian than Christian religions.
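(To make that update concrete, here is a minimal sketch in Python. The priors and likelihoods below are entirely made-up placeholders, not anyone’s actual credences; the only point it illustrates is that when two hypotheses assign the same likelihood to the evidence, updating cannot change the ratio between them.)

```python
# Hedged illustration of the update described above.
# All numbers are invented for the example.

def posterior(priors, likelihoods):
    """Apply Bayes' theorem to a dict of priors given per-hypothesis likelihoods."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {
    "X_atheism": 0.90,
    "Y_christianity": 0.01,
    "Z_other_religion": 0.09,
}

# Both supernatural hypotheses are taken to predict the experience equally well;
# atheism predicts it less well. (Illustrative numbers only.)
likelihoods = {
    "X_atheism": 0.05,
    "Y_christianity": 0.50,
    "Z_other_religion": 0.50,
}

post = posterior(priors, likelihoods)
print(post)

# Y and Z both grow at X's expense, but the Y:Z ratio is untouched:
print(priors["Y_christianity"] / priors["Z_other_religion"],
      post["Y_christianity"] / post["Z_other_religion"])
```

Whatever hypothetical numbers you plug in, Y and Z both gain probability relative to atheism, but their standing relative to each other stays exactly where the priors put it.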
I understand you as having two categories of objections: first, objections that the specific people in the Islamic conversion stories are untrustworthy or their stories uninteresting (3,4,6). Second, that you find mystical experiences by other people inherently hard to believe but you believe your own because you are a normal sane person (1,2,5).
The first category of objections apply only to those specific people’s stories. That’s fair enough since those were the ones I presented, but they were the ones I presented because they were the first few good ones I found in the vast vast vast vast VAST Islamic conversion story literature. I assume that if you were to list your criteria for believability, we could eventually find some Muslim who experienced a seemingly miraculous conversion who fit all of those criteria (including changing as a person) - if it’s important to you to test this, we can try.
The second category of objections is more interesting. Different studies show somewhere from a third to half of Americans having mystical experiences, including about a third of non-religious people who have less incentive to lie. Five percent of people experience them “regularly”. Even granted that some of these people are lying and other people categorize “I felt really good” as a mystical experience, I don’t think denying that these occur is really an option.
The typical view that people need to be crazy, or on the brink of death, or uneducated, or something other than a normal middle class college-educated WASP adult in order to have mystical experiences also breaks down before the evidence. According to Greeley 1975 and Hay and Morisy 1976, well-educated upper class people are more likely to have mystical experiences, and Hay and Morisy 1978 found that people with mystical experiences are more likely to be mentally well-balanced.
Since these experiences occur with equal frequency among people of all religions and even atheists, I continue to think this supports either the “natural mental process” idea or the “different factions of demons” idea—you can probably guess which one I prefer :)
There are 1.57 billion Muslims and 2.2 billion Christians in the world. Barring something very New-Agey going on, at least one of those groups believes an evil lie. The number of Muslims who convert to Christianity at some point in their lives, or vice versa, is only a tiny fraction of a percent. So either only a tiny fraction of a percent of people are open to the knowledge—so tiny that you could not reasonably expect yourself to be among them—or your experience has just been empirically disproven.
(PS: You’re in a lot of conversations at once—let me know if you want me to drop this discussion, or postpone it for later)
Speaking of mystical experiences, my religion tutor at the university (an amazing woman, Christian but pretty rational and liberal) had one, as she told us, on public transport one day, and that’s when she converted, despite growing up in an atheistic middle-class Soviet family.
Oh, and the closest thing I ever had to one was when I tried sensory deprivation + dissociatives (getting high on cough syrup, then submerging myself in a warm bath with the lights out and ears plugged; I had a timer set to 40 minutes and a thin ray of light falling where I could see it by turning my head, as a precaution against, y’know, losing myself). That experiment was both euphoric and interesting, but I wouldn’t really want to repeat it. I experienced blissful ego death and a feeling of the universe spinning round and round in cycles, around where I would be, but where now was nothing. It’s hard to describe.
And then, well, I saw the tiny, shining shape of Rei Ayanami. She was standing in her white plugsuit amidst the blasted ruins on a dead alien world, and I got the feeling that she was there to restore it to life. She didn’t look at me, but I knew she knew I saw her. Then it was over.
Fret not, I didn’t really make any more bullshit out of that, but it’s certainly an awesome moment to remember.
Unless I know them already. Once I already know people for honest, normal, sane people (“normal” isn’t actually required and I object to the typicalist language), their miracle stories have the same weight as my own. Also, miracles of more empirically-verifiable sorts are believable when vetted by snopes.com.
Xe is poetic and awesome. I’m hoping it’ll become standard English. To that end, I use it often.
I read your first link and I’m very surprised because I didn’t expect something like that. It would be interesting to talk to that person about this.
Is that surprising? First of all, I know that I already converted to Christianity, rather than just having assumed it always, so I’m already more likely to be open to new facts. And second, I thought it was common knowledge around these parts that most people are really, really bad at finding the truth. How many people know Bayes? How many know what confirmation bias is? Anchoring? The Litany of Tarski? Don’t people on this site rail against how low the sanity waterline is? I mean, you don’t disagree that I’m more rational than most Christians and Muslims, right?
Do they do this by using tricks like Multiheaded described? Or by using mystical plants or meditation? (I know there are Christians who think repeating a certain prayer as a mantra and meditating on it for a long time is supposed to work… and isn’t there, or wasn’t there, some Islamic sect where people try to find God by spinning around?) If so, that really doesn’t count. Is there another study where that question was asked? Because if you’re asserting that mystical experiences can be artificially induced by such means in most if not all people, then we’re in agreement.
I was thinking more along the lines of “going to hell is a natural consequence of worshiping Astarte”, analogous to “if I listen to my peers and smoke pot, I won’t be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad”. I hadn’t even considered it from that point of view before.
No, I suppose it’s not surprising. I guess I misread the connotations of your claim. Although I am still not certain I agree: I know some very rational and intelligent Christians, and some very rational and intelligent atheists (I don’t really know many Muslims, so I can’t say anything about them). At some point I guess this statement is true by definition, since we can define open-minded as “open-minded enough to convert religion if you have good enough evidence to do so.” But I can’t remember where we were going with this one so I’ll shut up about it.
I was unable to find numerical data on this. I did find some assertions in the surveys that some of the mystical experience was untriggered, I found one study comparing 31 people with triggered mystical experience to 31 people with untriggered mystical experience (suggesting it’s not too hard to get a sample of the latter), and I have heard anecdotes from people I know about having untriggered mystical experience.
Honestly I had never really thought of that as an important difference. Keep in mind that it’s really weird that the brain responds to relatively normal stressors, like fasting or twirling or staying still for too long, by producing this incredible feeling of union with God. Think of how surprising this would be if you weren’t previously aware of it, how complex a behavior this is, as opposed to something simpler like falling unconscious. The brain seems to have this built-in, surprising tendency to have mystical experiences, which can be triggered by a lot of different things.
As someone in the field of medicine, this calls to mind the case of seizures, another unusual mental event which can be triggered in similar conditions. Doctors have this concept called the “seizure threshold”. Some people have low seizure thresholds, other people high seizure thresholds. Various events—taking certain drugs, getting certain diseases, being very stressed, even seeing flashing lights in certain patterns—increases your chance of having a seizure, until it passes your personal seizure threshold and you have one. And then there are some people—your epileptics—who can just have seizures seemingly out of nowhere in the course of everyday life (another example is that some lucky people can induce orgasm at will, whereas most of us only achieve orgasm after certain triggers).
I see mystical experiences as working a lot like seizures—anyone can have one if they experience enough triggers, and some people experience them without any triggers at all. It wouldn’t be at all parsimonious to say that some people have this reaction when they skip a few meals, or stay in the dark, or sit very still, and other people have this reaction when they haven’t done any of these things, but these are caused by two completely different processes.
I mean, if we already know that dreaming up mystical experiences is the sort of thing the brain does in some conditions, it’s a lot easier to expand that to “and it also does that in other conditions” than to say “but if it happens in other conditions, it is proof of God and angels and demons and an entire structure of supernatural entities.”
The (relatively sparse) Biblical evidence suggests an active role of God in creating Hell and damning people to it. For example:
“This is how it will be at the end of the age. The angels will come and separate the wicked from the righteous and throw them into the blazing furnace, where there will be weeping and gnashing of teeth.” (Matthew 13:49)
“Depart from me, you accursed, into the eternal fire that has been prepared for the devil and his angels!” (Matthew 25:41)
“If anyone’s name was not found written in the book of life, that person was thrown into the lake of fire.” (Revelation 20:15)
“God did not spare angels when they sinned, but sent them to hell, putting them into gloomy dungeons to be held for judgment” (2 Peter 2:4)
“Fear him who, after the killing of the body, has power to throw you into hell. Yes, I tell you, fear him.” (Luke 12:5)
That last one is particularly, um, pleasant. And it’s part of why it is difficult for me to see a moral superiority of Jehovah over Astarte: of the one who’s torturing people eternally, over the one who fails to inform you that her rival is torturing people eternally.
To return to something I pointed out far, far back in this thread, this is not analogous. Your mother does not cause you to lose your voice for doing the things she advises you not to do. On the other hand, you presumably believe that god created hell, or, at a minimum, that he tolerates its existence (unless you don’t think God is omnipotent).
(As an aside, another point against the homogeneity you mistakenly assumed you would find on Lesswrong when you first showed up is that not everyone here is a complete moral anti-realist. For me, that one cannot hold the following three premises without contradiction is sufficient to discount any deeper argument for Christianity:
Inflicting suffering is immoral, and inflicting it on an infinite number of people or for an infinite duration is infinitely immoral.
The Christian God is benevolent.
The Christian God allows the existence of Hell.
Resorting to, “Well, I don’t actually know what hell is” is blatant rationalization.)
You don’t actually need to be a moral realist to make that argument; you just need to notice the tension between the set of behavior implied by the Christian God’s traditional attributes and the set of behavior Christian tradition claims for him directly. That in itself implies either a contradiction or some very sketchy use of language (i.e. saying that divine justice allows for infinitely disproportionate retribution).
I think it’s a weakish argument against anything less than a strictly literalist interpretation of the traditions concerning Hell, though. There are versions of the redemption narrative central to Christianity that don’t necessarily involve torturing people for eternity: the simplest one that I know of says that those who die absent a state of grace simply cease to exist (“everlasting life” is used interchangeably with “heaven” in the Bible), although there are interpretations less problematic than that as well.
The (modern) Orthodox opinion that my tutor relayed to us is that Hell isn’t a place at all, but a condition of the soul where it refuses to perceive/accept God’s grace at all and therefore shuts itself out from everything true and meaningful that can be, just wallowing in despair; it exists in literally no-where, as all creation is God’s, and the refusal of God is the very essence of this state. She dismissed all suggestions of sinners’ “torture” in hell—especially by demonic entities—as folk religion.
(Wait, what’s that, looks like either I misquoted her a little or she didn’t quite give the official opinion...)
http://en.wikipedia.org/wiki/Hell_in_Christian_beliefs#Eastern_Orthodox_concepts_of_hell
I has a confused.
I’ve heard that one too, but I’m not sure how functionally different from pitchforks and brimstone I’d consider it to be, especially in light of the idea of a Last Judgment common to Christianity and Islam.
Oh, there’s a difference alright, one that could be cynically interpreted as an attempt to dodge the issue of cruel and disproportionate punishment by theologians. The version above suggests that God doesn’t ever actively punish anyone at all, He simply refuses to force His way to someone who rejects him, even if they suffer as a result. That’s sometimes assumed to be due to God’s respect for free will.
Yeah. Thing is, we’re dealing with an entity who created the system and has unbounded power within it. Respect for free will is a pretty good excuse, but given that it’s conceivable for a soul to be created that wouldn’t respond with permanent and unspeakable despair to separation from the Christian God (or to the presence of a God whom the soul has rejected, in the other scenario), making souls that way looks, at best, rather irresponsible.
If I remember right the standard response to that is to say that human souls were created to be part of a system with God at its center, but that just raises further questions.
What, so god judges that eternal torture is somehow preferable to violating someone’s free will by inviting them to eutopia?
I am so tired of theists making their god so unable to be falsified that he becomes useless. Let’s assume for a moment that some form of god actually exists. I don’t care how much he loves us in his own twisted little way, I can think of 100 ways to improve the world and he isn’t doing any of them. It seems to me that we ought to be able to do better than what god has done, and in fact we have.
The standard response to theists postulating a god should be “so what?”.
’s cool, bro, relax. I agree completely with that, I’m just explaining what the other side claims.
Actually, I do. You use the language that rationalists use. However, you don’t seem to have considered very many alternate hypotheses. And you don’t seem to have performed any of the obvious tests to make sure you’re actually getting information out of your evidence.
For instance, you could have just cut up a bunch of similarly formatted stories from different sources (or, even better, had a third party do it for you, so you don’t see it), stuck them in a box and pulled them out at random—sorting them into Bible and non-Bible piles according to your feelings. If you were getting the sort of information out that would go some way towards justifying your beliefs, you should easily beat random people of equal familiarity with the Bible.
Rationality is a tool, and if someone doesn’t use it, then it doesn’t matter how good a tool they have; they’re not a rationalist any more than someone who owns a gun is a soldier. Rationalists have to actually go out and gather/analyse the data.
(Edit to change you to someone for clarity’s sake.)
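For concreteness, here is a rough sketch (in Python) of how a blinded sorting test like the one described above might be scored against chance. The passage counts and the coin-flip baseline are assumptions for illustration only, not a protocol anyone here has agreed to; a real test would also need equally Bible-familiar controls run on the same passages.

```python
# Rough scoring sketch for a blinded "Bible vs. not-Bible" sorting test.
# The counts below are placeholders; the real test would use passages the
# sorter hasn't seen, prepared and shuffled by a third party.

from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting k or more correct out of n by guessing with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_quotes = 40    # hypothetical number of blinded passages
n_correct = 29   # hypothetical number sorted into the right pile

p_value = p_at_least(n_correct, n_quotes)
print(f"Chance of doing this well by coin-flipping: {p_value:.4f}")
# Familiarity alone could beat a 50% baseline, which is why matched controls
# of equal Bible familiarity matter more than the raw p-value.
```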
No, I couldn’t have, for two reasons. By the time I could have thought of it, I would have recognized nearly all the Bible passages as Biblical, and obscuring their source would have required such short quotes that I’d never be able to tell anything. Those are things I already explained—you know, in the post where I said we should totally test this, using a similar experiment.
If that’s the stance you’re going to take, it seems destructive to the idea that I should consider you rational. You proposed a test to verify your belief that could not be performed, knowing that if it were performed, it would give misleading results.
Minor points: there’s more than just one Bible out there. Unless you’re a biblical scholar, the odds that you’ve read everything in every version of the Bible are fairly slim.
‘nearly all’ does leave you with some testable evidence. The odds that it just happens to be too short a test for your truth-sensing faculty to work are, I think, fairly slim.
People tend not to have perfect memories. Even if you are a biblical scholar the odds are that you will make mistakes in this, as you would in anything else, and information gained from the intuitive faculty would be expressed as a lower error rate than like-qualified people.
ETA quote.
Similar test. Not the same test. It was a test that, though still flawed, fixed those two things I could see immediately (and in doing so created other problems).
Want to test this?
I don’t see that it would have fixed those things. We could, perhaps, come up with a more useful test if we discussed it on a less hostile footing. But, at the moment, I’m not getting a whole lot of info out of the exchange and don’t think it worth arguing with you over quite why your test wouldn’t work, since we both agree that it wouldn’t.
Not really. It’s not that sort of thing where the outputs of the test would have much value for me. I could easily get 100% of the quotes correct by sticking them into google, as could you. The only answers we could accept with any significant confidence would be the ones we didn’t think the other person was likely to lie about.
My beliefs in respect to claims about the supernatural are held with a high degree of confidence, and pushing them some tiny distance towards the false end of the spectrum is not worth the hours I would have to invest.
If you can say more about why deliberately induced mystical experiences don’t count, but other kinds do, I’d be interested.
For the same reason that if I had a see-an-image-of-Grandpa button, and pushed it, I wouldn’t count the fact that I saw him as evidence that he’s somehow still alive, but if I saw him right now spontaneously, I would.
Imagine that you have a switch in your home which responds to your touch by turning on a lamp (this probably won’t take much imagination). One day this lamp, which was off, suddenly and for no apparent reason turns on. Would you assign supernatural or mundane causes to this event?
Now this isn’t absolute proof that the switch wasn’t turned on by something otherworldly; perhaps it responds to both mundane and supernatural causes. But, well, if I may be blunt, Occam’s Razor. If your best explanations are “the Hand of Zeus” and “Mittens, my cat,” then …
I assume much the same things about this as any other sense: it’s there to give information about the world, but trickable. I mean, how tired you feel is a good measure of how long it’s been since you’ve slept, but you can drink coffee and end up feeling more energetic than is merited. So if I want to be able to tell how much sleep I really need, I should avoid caffeine. That doesn’t mean the existence of caffeine makes your subjective feelings of your own energy level arbitrary or worthless.
Interestingly, this sounds like the way that I used to view my own spiritual experiences. While I can’t claim to have ever had a full-blown vision, I have had powerful, spontaneous feelings associated with prayer and other internal and external religious stimuli. I assumed that God was trying to tell me something. Later, I started to wonder why I was also having these same powerful feelings at odd times clearly not associated with religious experiences, and in situations where there was no message for me as far as I could tell.
On introspection, I realized that I associated this with God because I’d been taught by people at church to identify this “frisson” with spirituality. At the time, it was the most accessible explanation. But there was no other reason for me to believe that explanation over a natural one. That I was getting data that seemed to contradict the “God’s spirit” hypothesis eventually led to an update.
Unfortunately, the example you’re drawing the analogy to is just as unclear to me as the original example I’d requested an explanation of.
I mean, I agree that seeing an image of my dead grandfather isn’t particularly strong evidence that he’s alive. Indeed, I see images of dead relatives on a fairly regular basis, and I continue to believe that they’re dead. But I think that’s equally true whether I deliberately invoked such an image, or didn’t.
I get that you think it is evidence that he’s alive when the image isn’t deliberately invoked, and I can understand how the reason for that would be the same as the reason for thinking that a mystical experience “counts” when it isn’t deliberately invoked, but I am just as unclear about what that reason is as I was to start with.
If I suddenly saw my dead grandpa standing in front of me, that would be sufficiently surprising that I’d want an explanation. It’s not sufficiently strong to make me believe by itself, but I’d say hello and see if he answered and if he sounded like my grandpa, then tell him he looks like someone I know and see the reaction, and if he reacted like Grandpa, I’d touch him to ascertain that he’s corporeal, then invite him to come chat with me until I wake up, and assuming that everything else seemed non-dream-like (I’ll eventually have to read something, providing an opportunity to test whether or not I’m dreaming, plus I can try comparing physics to how it should be, perhaps by trying to fly), I’d tell my mom he’s here.
Whereas if I had such a button, I’d ignore the image, because it wouldn’t be surprising. I suppose looking at photographs is kind of like the button.
Well, wait up. Now you’re comparing two conditions with two variables, rather than one.
That is, not only is grandpa spontaneous in case A and button-initiated in case B, but also grandpa is a convincing corporeal facsimile of your grandpa in case A and not any of those things in case B. I totally get how a convincing facsimile of grandpa would “count” where an unconvincing image wouldn’t (and, by analogy, how a convincing mystical experience would count where an unconvincing one wouldn’t) but that wasn’t the claim you started out making.
Suppose you discovered a button that, when pressed, created something standing in front of you that looked like your dead grandpa, sounded and reacted like your grandpa, chatted with you like you believe your grandpa would, etc. Would you ignore that?
It seems like you’re claiming that you would, because it wouldn’t be surprising… from which I infer that mystical experiences have to be surprising to count (which had been my original question, after all). But I’m not sure I properly understood you.
For my own part, if I’m willing to believe that my dead grandpa can come back to life at all, I can’t see why the existence of a button that does this routinely should make me less willing to believe it.
The issue is that there is not a reliable “see-an-image-of-Grandpa button” in existence for mystical experiences. In other words, I’m unaware of any techniques that reliably induce mystical experiences. Since there are no techniques for reliably inducing mystical experiences, there is no basis for rejecting some examples of mystical experience as “unnatural/artificial mystical experiences.”
As an aside, if you are still interested in evaluating readings, I would be interested in your take on this one
Now you’re aware of one.
Yes: Dervishes.
yes
You’ve stated that you judge morality on a consequentialist basis. Now you state that going to hell is somehow not equivalent to god torturing you for eternity. What gives?
Also: You believe in god because your belief in god implies that you really ought to believe in god? What? Is that circular or recursively justified? If the latter, please explain.
Hidden cameras help. So do setups like “leave a dollar, take a bagel” left in the office kitchen.
That’s a great idea! Now if only we could randomly assign people to convert to either Wicca or Christianity, we’d be all set. Unfortunately...
It’s not exactly rigorous, but you could try leaving bagels at Christian and Wiccan gatherings of approximately the same size and see how many dollars you get back.
That’s an idea, but you’d need to know how they started out. If generally nice people joined one religion and stayed the same, and generally horrible people joined the other and became better people, they might look the same on the bagel test.
True. You could control for that by seeing if established communities are more or less prone to stealing bagels than younger ones, but that would take a lot more data points.
Indeed. Or you could test the people themselves individually. What if you got a bunch of very new converts to various religions, possibly more than just Christianity and Wicca, and tested them on the bagels and gave them a questionnaire containing some questions about morals and some about their conversion and some decoys to throw them off, then called them back again every year for the same tests, repeating for several years?
I don’t really trust self-evaluation for questions like this, unfortunately—it’s too likely to be confounded by people’s moral self-image, which is exactly the sort of thing I’d expect to be affected by a religious conversion. Bagels would still work, though.
Actually, if I was designing a study like this I think I’d sign a bunch of people up ostensibly for longitudinal evaluation on a completely different topic—and leave a basket of bagels in the waiting room.
What about a study ostensibly of the health of people who convert to new religions? Bagels in the waiting room, new converts, random not-too-unpleasant medical tests for no real reason? Repeat yearly?
The moral questionnaire would be interesting because people’s own conscious ethics might reflect something cool and if you’re gonna test it anyway… but on the other hand, yeah. I don’t trust them to evaluate how moral they are, either. But if people signal what they believe is right, then that means you do know what they think is good. You could use that to see a shift from no morals at all to believing morals are right and good to have. And just out of curiosity, I’d like to see if they shifted from deontologist to consequentialist ethics, or vice versa.
Yeah, that all sounds good to me.
People don’t necessarily signal what they think is right; sometimes they signal attitudes they think other people want them to possess. Admittedly, in a homogenous environment that can cause people to eventually endorse what they’ve been signaling.
Hm, you’d probably want the bagels to be off in a small side room so that the patients can feel alone while considering whether or not to steal one.
Yes, definitely. Or in a waiting room. “Oops, sorry, we’re running a little late. Wait here in this deserted waiting room till five minutes from now, bye. :)” Otherwise, they might not see them.
Or perhaps neither Jehovah nor Astarte knows now who will dominate in the end, and any promises either makes to any followers are, ahem, over-confident? :-) There was a line I read somewhere about how all generals tell their troops that their side will be victorious...
So you’re assuming both sides are in a duel, and that the winner will send xyr worshipers to heaven and the loser’s worshipers to hell? Because I was not.
Only Jehovah. He says that he’s going to send his worshipers to heaven and Astarte’s to hell. Astarte says neither Jehovah nor she will send anyone anywhere. Either one could be a liar, or they could be in a duel and each describing what happens if xe wins.
Only as a hypothetical possibility. (From such evidence as I’ve seen I don’t think either really exists. And I have seen a fair number of Wiccan ceremonies—which seem like reasonably decent theater, but that’s all.) One could construe some biblical passages as predicting some sort of duel—and if one believed those passages, and that interpretation, then the question of whether one side was overstating its chances would be relevant.
Maybe I’m lacking context, but I’m not sure why you bring this up. Has anyone here described religious beliefs as being characteristically caused by mental illness? I’d be concerned if they had, since such a statement would be (a) incorrect and (b) stigmatizing.
In this post, Eliezer characterized John C. Wright’s conversion to Catholicism as the result of a temporal lobe epileptic fit and said that at least some (not sure if he meant all) religious experiences were “brain malfunctions.”
Interesting that this post has been downvoted. Care to explain? It seems to me that I am straightforwardly answering a question.
The relevant category is probably not explanations for religious beliefs, but rather explanations of experiences such as AK has reported of what, for lack of a better term, I will call extrasensory perception. Most of the people I know who have religious beliefs don’t report extrasensory perception, and most of the people I know who report extrasensory perception don’t have religious beliefs. (Though of the people I know who do both, a reasonable number ascribe a causal relationship between them. The direction varies.)
You are. That’s the main alternate explanation I can think of.
But mental illness is not required to experience strong, odd feelings or even to “hear voices”. Fully functional human brains can easily generate such things.
Religious experience isn’t usually pathologized in the mainstream (academically or by laypeople) unless it makes up part of a larger pattern of experience that’s disruptive to normal life, but that doesn’t say much one way or another about LW’s attitude toward it.
My experience with LW’s attitude has been similar, though owing to a different reason. Religion generally seems to be treated here as the result of cognitive bias, same as any number of other poorly set up beliefs.
Though LW does tend to use the word “insane” in a way that includes any kind of irrational cognition, I so far have interpreted that to mostly be slang, not meant to literally imply that all irrational cognition is mental illness (although the symptoms of many mental illnesses can be seen as a subset of irrational cognition).
Not having certain irrational biases can be said to be a subset of mental illness.
How so? I can only think of Straw Vulcan examples. (Or, by “can be said”, do you mean to imply that you disagree with the statement?)
A subset of those diagnosed or diagnosable with high functioning autism and a subset of the features that constitute that label fit this category. Being rational is not normal.
I don’t affiliate myself with the DSM, nor is it always representative of an optimal way of carving reality. In this case I didn’t want to specify one way or the other.
Things like more accurate self-evaluations by depressed people.
tl;dr for the last two comments (Just to help me understand this; if I misrepresent anyone, please call me out on it.)
Yvain: So you believe in multiple factions of supernatural beings; why do you think Jehovah is the benevolent side? Other gods have done awesomecool stuff too, and Jehovah’s known to do downright evil stuff.
AK: Not multiple factions, just two. As to why I think Jehovah’s the good guy.....
Don’t you think that’s an unjustified nitpick? Absolutely awful people are rare, people who have revelations are rarer, so obviously absolutely awful people who had revelations have to be extremely difficult to find. So it’s not really surprising that two links someone gave you don’t mention a story like that.
But I think you’re assuming that the hallmark of a true religion is that it drastically increases the morality of its adherents. And that’s an assumption you have no grounds for—all that happened in your case was that the needle of your moral compass swerved from ‘absolute scumbag’ to ‘reasonably nice person’. There’s no reason to generalise that and believe that the moral compass of a reasonably nice person would swerve further to ‘absolute saint’.
Anyhow, your testable prediction is ‘converts to false religions won’t show moral improvement’. I doubt there’s any data on stuff like that right now (if there is, my apologies), so we have to rely on anecdotal evidence. The problem with that, of course, is that it’s notoriously unreliable… If it doesn’t show what you want it to show, you can just dismiss it all as lies or outliers or whatever. Doesn’t really answer any questions.
And if you’re willing to consider that kind of anecdotal evidence, why not other kinds of anecdotal evidence that sound just as convincing?
How convenient. When it happens to someone else it’s a lie/delusion/hallucination, when it happens to you it’s a miracle.
And yet… Back to your premise. Even if your personality changed for the better… How does this show in any way that Jehovah’s a good guy? Surely even an evil daemon has no use for social outcasts with a propensity for random acts of violence; a normal person would probably serve them better. And how do you answer Yvain’s point about all the evil Jehovah has done? How do you know he’s the good guy?
....
Everyone else: Why are we playing the “let’s assume everything you say is true” game anyway? Surely it’d be more honest to try and establish that his mystical experiences were all hallucinations?
We’ll have to ask how God and Santa Claus manage to pull it off.
I prefer TheOtherDave’s idea. Unlike God, we’re not omniscient or capable of reading minds. And unlike Santa Claus, we exist.
Well, now that you mention it… I infer that if you read someone’s user page and got sensation A or B off of it, you would consider that evidence about the user’s morality. Yes? No?
Yes. But it would be more credible to other people, and make for a publishable study, if we used some other measure. It’d also be more certain that we’d actually get information.
Indeed, non-omniscience and fictitious nature seem like huge flaws in my plan.
Obviously I can’t speak for AK, but maybe she believes that she has been epistemically lucky. Compare the religious case:
“I had this experience which gave me evidence for divinity X, so I am going to believe in X. Others have had analogous experiences for divinities Y and Z, but according to the X religion I adopted those are demonic, so Y and Z believers are wrong. I was lucky though, since if I had had a Y experience I would have become a Y believer”.
with philosophical cases like the ones Alicorn discusses there:
“I accept philosophical position X because of compelling arguments I have been exposed to. Others have been exposed to seemingly compelling arguments for positions Y and Z, but according to X these arguments are flawed, so Y and Z believers are wrong. I was lucky though, since if I had gone to a university with Y teachers I would have become a Y believer”.
It may be that the philosopher is also being irrational here and that she could strive more to transcend her education and assess X vs Y impartially, but in the end it is impossible to escape this kind of irrationality at all levels at once and assess beliefs from a perfect vacuum. We all find some things compelling and not others because of the kind of people we are and the kind of lives we have lived, and the best we can get is reflective equilibrium. Recursive justification hitting bottom and all that.
The question is whether AK is already in reflective equilibrium or if she can still profit from some meta-examination and reassess this part of her belief system. (I believe that some religious believers have reflected enough about their beliefs and the counterarguments to them that they are in this kind of equilibrium and there is no further argument from an atheist that can rationally move them—though these are a minority and not representative of typical religious folks.)
See my response here—if Alicorn is saying she knows the other side has arguments exactly as convincing as those which led her to her side, but she is still justified to continue believing her side more likely than the other, I disagree with her.
You’re doing it wrong. The power of the Litany comes from evidence. Every time you apply the Litany of Gendlin to an unsubstantiated assertion, a faerie drops dead.
I think this is a joke, ish, right? Because it’s quite witty. /tangent
I mentioned some evidence elsewhere in the thread.
“Ish,” yes. I have to admit I’ve had a hard time navigating this enormous thread, and haven’t read all of it, including the evidence of demonic influence you’re referring to. However, I predict in advance that 1) this evidence is based on words that a man wrote in an ancient book, and that 2) I will find this evidence dubious.
Two equally unlikely propositions should require equally strong evidence to be believed. Neither dragons nor demons exist, yet you assert that demons are real. Where, then, is the chain of entangled events leading from the state of the universe to the state of your mind? Honest truth-seeking is about dispassionately scrutinizing that chain, as an outsider would, and allowing others to scrutinize, evaluate, and verify it.
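(To put the same point in standard Bayesian terms, as a rough sketch rather than a claim about what you believe: the odds form of Bayes’ theorem is

$O(H \mid E) = O(H) \cdot \dfrac{P(E \mid H)}{P(E \mid \neg H)}$

so if the prior odds for demons are as low as the prior odds for dragons, believing in one but not the other requires a correspondingly lopsided likelihood ratio. That likelihood ratio is exactly what the “chain of entangled events” is supposed to supply.)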
I was a Mormon missionary at 19. I used to give people copies of the Book of Mormon, testify of my conviction that it was true, and invite them to read it and pray about it. A few did (Most people in Iowa and Illinois aren’t particularly vulnerable to Mormonism). A few of those people eventually (usually after meeting with us several times) came to feel as I did, that the book was true. I told those people that the feeling they felt was the Holy Spirit, manifesting the truth to them. And if that book is true, I told them, then Joseph Smith must have been a true prophet. And as a true prophet, the church that he established must be the Only True Church, according to Joseph’s revelations and teachings. I would then invite them to be baptized (which was the most important metric in the mission), and to become a member of the LDS church. One of the church’s teachings is that a person can become as God after death (omniscience and omnipotence included). Did the chain of reasoning leading from “I have a feeling that this book is true” justify the belief that “I can become like God”?
You are intelligent and capable of making good rhetorical arguments (from what I have read of your posts in the last week or two). I see you wielding Gendlin, for example, in support of your views. At some level, you’re getting it. But the point of Gendlin is to encourage truth-seekers desiring to cast off comforting false beliefs. It works properly only if you are also willing to invoke Tarski:
Let me not become attached to beliefs I may not want.
Upvoted for being a completely reasonable comment given that you haven’t read through the entirety of a thread that’s gotten totally monstrous.
Only partly right.
Of course you will. If I told you that God himself appeared to me personally and told me everything in the Bible was true, you’d find that dubious, too. Perhaps even more dubious.
Already partly in other posts on this thread (actually largely in other posts on this thread), buried somewhere, among something. You’ll forgive me for not wanting to retype multiple pages, I hope.
Certainly. I’m now curious though: if I told you that God appeared to me personally and told me everything in the Bible was true (either for some specific meaning of “the Bible,” which is of course an ambiguous phrase, or leaving it not further specified), roughly how much confidence would you have that I was telling you the truth?
It would depend on how you said it—as a joke, or as an explanation for why you suddenly believed in God and had decided to convert to Christianity, or as a puzzling experience that you were trying to figure out, or something else—and whether it was April 1 or not, and what you meant by “the Bible” (whether you specified it or not), and how you described God and the vision and your plans for the future.
But I’d take it with a grain of salt. I’d probably investigate further and continue correspondence with you for some time, both to help you as well as I could and to ascertain with more certainty the source of your belief that God came to you (whether he really did or it was a drug-induced hallucination or something). It would not be something I’d bet on either way, at least not just from hearing it said.
Ah, apologies if I’ve completely missed the point (which is entirely possible).
No, but generally, applying a derogatory epithet to an entire group of people is seen as rude, unless you back it up with evidence, which in this case you did not do. You just stated it.
In his afterword, EY seems to be saying that the benign actions of his friends and family are inconsistent with the malicious actions of YHVH, as he is depicted in Exodus. This is different from flat-out stating, “all theists are evil” and leaving it at that. EY is offering evidence for his position, and he is also giving credit to theists for being good people despite their religion (as he sees it).
I can’t speak for “you guys”, only for myself; and I personally don’t think that your beliefs are particularly offensive, just the manner in which you’re stating them. It’s kind of like the difference between saying, “Christianity is wrong because Jesus is a fairytale and all Christians are idiots for believing it”, versus, “I believe that Christians are mistaken because of reasons X, Y and Z”.
Well, personally, I believe it’s wrong because no gods or demons of any kind exist.
Wiccans, on the other hand, would probably tell you that you’re wrong because Wicca had made them better people, who are more loving, selfless, and considerate of others, which is inconsistent with the expected result of worshiping evil demons. I can’t speak for all Wiccans, obviously; this is just what I’d personally heard some Wiccans say.
I object to the use of social politics to overwhelm assertions of fact. Christians and Wiccans obviously find each other offensive rather frequently. Both groups (particularly the former) probably also find me offensive. In all cases I say that is their problem.
Now if the Christians were burning the witches I might consider it appropriate to intervene forcefully...
Incidentally I wouldn’t have objected if you responded to “They consort with demons” with “What a load of bullshit. Get a clue!”
I was really objecting to the unsupported assertion; I wouldn’t have minded if AK said, “they consort with demons, and here’s the evidence”.
Well, I personally do fully endorse that statement, but the existence of gods and demons is a matter of faith, or of personal experience, and thus whatever evidence or reason I can bring to bear in support of my statement is bound to be unpersuasive.
Off-topic nitpick: I like to be called a demon-worshiper.
You’re a demon-worshipper!
Oh the innuendo. At this point in the thread, I guess the only way to make the depravity more exquisite would be if you said you enjoy being called a demon’s consort. 0_0
Would a constructor of asynchronous process-level parallel structures be a daemon wrangler?
Fair enough :-)
Well, if the entities Wiccans worship actually did exist, rather than merely in a lame memetic or psychological-trick way… it is very unlikely they would be benign. The same could be said of many other religions.
Okay, I’ll bite. On what basis do you conclude that the entities that modern day wiccans worship are demonic, rather than simply imaginary?
Because the religion is evil rather than misguided. Whereas, say, Hinduism, for instance, is just really misguided. See other conversation. Also see Exodus 22:18 and Deuteronomy 18:10.
(I wish I had predicted that this would end this way before I answered that post… then I might not have done so.)
OK, last one from me, if you’re still up for it.
There is nothing that you can claim, nothing that you can demand, nothing that you can take. And as soon as you try to take something as if it were your own—you lose your [innocence]. The angel with the flaming sword stands armed against all selfhood that is small and particular, against the “I” that can say “I want...” “I need...” “I demand...” No individual enters Paradise, only the integrity of the Person.
Only the greatest humility can give us the instinctive delicacy and caution that will prevent us from reaching out for pleasures and satisfactions that we can understand and savor in this darkness. The moment we demand anything for ourselves or even trust in any action of our own to procure a deeper intensification of this pure and serene rest in [God], we defile and dissipate the perfect gift that [He] desires to communicate to us in the silence and repose of our own powers.
If there is one thing we must do it is this: we must realize to the very depths of our being that this is a pure gift of [God] which no desire, no effort and no heroism of ours can do anything to deserve or obtain. There is nothing we can do directly either to procure it or to preserve it or to increase it. Our own activity is for the most part an obstacle to the infusion of this peaceful and pacifying light, with the exception that [God] may demand certain acts and works of us by charity or obedience, and maintain us in deep experimental union with [Him] through them all, by [His] own good pleasure, not by any fidelity of ours.
At best we can dispose ourselves for the reception of this great gift by resting in the heart of our own poverty, keeping our soul as far as possible empty of desires for all the things that please and preoccupy our nature, no matter how pure or sublime they may be in themselves.
And when [God] reveals [Himself] to us in contemplation we must accept [Him] as [He] comes to us, in [His] own obscurity, in [His] own silence, not interrupting [Him] with arguments or words, conceptions or activities that belong to the level of our own tedious and labored existence.
We must respond to [God]’s gifts gladly and freely with thanksgiving, happiness and joy; but in contemplation we thank [Him] less by words than by the serene happiness of silent acceptance. … It is our emptiness in the presence of the abyss of [His] reality, our silence in the presence of [His] infinitely rich silence, our joy in the bosom of the serene darkness in which [His] light holds us absorbed, it is all this that praises [Him]. It is this that causes love of [God] and wonder and adoration to swim up into us like tidal waves out of the depths of that peace, and break upon the shores of our consciousness in a vast, hushed surf of inarticulate praise, praise and glory!
(I might fail to communicate clearly with this comment; if so, my apologies, it’s not purposeful. E.g. normally if I said “Thomistic metaphysical God” I would assume the reader either knew what I meant (were willing to Google “Thomism”, say) or wasn’t worth talking to. I’ll try not to do that kind of thing in this comment as badly as I normally do. I’m also honestly somewhat confused about a lot of Catholic doctrine and so my comment will likely be confused as a result. To make things worse I only feel as if I’m thinking clearly if I can think about things in terms of theoretical computer science, particularly algorithmic probability theory; unfortunately not only is it difficult to translate ideas into those conceptual schemes, those conceptual schemes are themselves flawed (e.g. due to possibilities of hypercomputation and fundamental problems with probability that’ve been unearthed by decision theory). So again, my apologies if the following is unclear.)
I’m going to accept your interpretation at face value, i.e. accept that you’re blessed with a supernatural charisma or something like that. That said, I’m not yet sure I buy the idea that the Thomistic metaphysical God, the sole optimal decision theory, the Form of the Good, the Logos-y thing, has much to do with transhumanly intelligent angels and demons of roughly the sort that folk around here would call superintelligences. (I haven’t yet read the literature on that subject.) In my current state of knowledge if I was getting supernatural signals (which I do, but not as regularly as you do) then I would treat them the same way I’d treat a source of information that claimed to be Chaitin’s constant: skeptically.
In fact it might not be a surface-level analogy to say that God is Chaitin’s omega (and is thus a Turing oracle), for they would seem to share a surprising number of properties. Of course Chaitin’s constant isn’t computable, so there’s no algorithmic way to check if the signals you’re getting come from God or from a demon that wants you to think it’s God (at least for claimed bits of Chaitin’s omega that you don’t already know). I believe the Christians have various arguments about states of mind that protect you from demonic influences like that; I haven’t read this article on infallibility yet but I suspect it’s informative.
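(For concreteness, here’s the standard definition, and nothing below hinges on the details: for a prefix-free universal machine $U$,

$\Omega = \sum_{p \,:\, U(p)\text{ halts}} 2^{-|p|}$

$\Omega$ is uncomputable and algorithmically random, but knowing its first $n$ bits would let you decide the halting problem for every program of length at most $n$. That’s the sense in which an agent holding more bits of $\Omega$ than you do has strictly more oracular power than you can verify.)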
Because there doesn’t seem to be an algorithmic way of checking if God is really God rather than any other agent that has more bits of Chaitin’s constant than you do, you’re left in a situation where you have to have what is called faith, I think. (I do not understand Aquinas’s arguments about faith yet; I’m not entirely sure I know what it is. I find the ideas counter-intuitive.) I believe that Catholics and maybe other Christians say that conscience is something like a gift from God and that you can trust it, so if your conscience objects to the signals you’re getting then that’s at least a red flag that you might be being influenced by self-delusion or demons or what have you. But this “conscience” thing seems to be algorithmic in nature (though that’s admittedly quite a contentious point), so if it can check the truth value of the moral information you’re getting supernaturally then you already had those bits of Chaitin’s constant. If your conscience doesn’t say anything about it then it would seem you’re dealing with a situation where you’re supposed/have to have faith. That’s the only way you can do better than an algorithmic approach.
Note that part of the reason I think about these things is ’cuz I want my FAI to be able to use bits of Chaitin’s constant that it finds in its environment so as to do uncomputable things it otherwise couldn’t. It is an extension of this same personal problem of what to do with information whose origin you can’t algorithmically verify.
Anyway it’s a sort of awkward situation to be in. It seems natural to assume that this agent is God but I’m not sure if that is acceptable by the standard of (Kant’s weirdly naive version of) the categorical imperative. I notice that I am very confused about counterfactual states of knowledge and various other things that make thinking about this very difficult.
So um, how do you approach the problem? Er did I even describe the problem in such a way that it’s understandable?
I don’t think I’m smart enough to follow this comment. Edit: but I think you’re wrong about me having some sort of supernatural charisma… I’m pretty sure I haven’t said I’m special, because if I did, I’d be wrong.
Hm, so how would you describe the mechanism behind your sensations then? (Sorry, I’d been primed to interpret your description in light of similar things I’d seen before which I would describe as “supernatural” for lack of a better word.)
...I wasn’t going to come back to say anything, but fine. I’d say it’s God’s doing. Not my own specialness. And I’m not going to continue this conversation further.
Okay, thanks. I didn’t mean to imply ’twas your own “specialness” as such; apologies for being unclear. ETA: Also I’m sorry for anything else? I get the impression I did/said something wrong. So yeah, sorry.
FWIW, apparently (per Wikipedia) the word “charism” “denotes any good gift that flows from God’s love to man.”
The dirt just sits there? It doesn’t also squeeze your skin? Or instead throb as if it had been squeezed for a while, but uniformly, not with a tourniquet, and was just released?
Just sits there. Anyway, dirt is a bad metaphor.
Oh and also you should definitely look into using this to help build/invoke FAI/God. E.g. my prospective team has a slot open which you might be perfect for. I’m currently affiliated with Leverage Research who recently received a large donation from Jaan Tallinn, who also supports the Singularity Institute.
I’m not convinced that this is an accurate perception of AspiringKnitter’s comments here so far.
E.g., I don’t think she’s yet claimed both omnipotence and omnibenevolence as attributes of god, so you may be criticizing views she doesn’t hold. If there’s a comment I missed, then ignore me. :)
But at a minimum, I think you misunderstood what she was asking by, “Do you mean that I can’t consider his nonexistence as a counterfactual?” She was asking, by my reading, if you thought she had displayed an actual incapability of thinking that thought.
.
If you’re granted “fictional”, then no. But if you don’t believe in unicorns, you’d better mean “magical horse with a horn” and not “narwhal” or “rhinoceros”.
For what it’s worth, the downvotes appear to be correlated with anyone discussing theology. Not directed at you in particular. At least, that’s my impression.
You do realize it might very well mean death to your Bayes score to say or think things like that around an omnipotent being who has a sense of humor, right? This is the sort of Dude Who wrestles with a mortal then names a nation to honor the match just to taunt future wannabe-Platonist Jews about how totally crazy their God is. He is perfectly capable of engineering some lucky socks just so He can make fun of you about it later. He’s that type of Guy. And you do realize that the generalization of Bayes score to decision theoretic contexts with objective morality is actually a direct measure of sinfulness? And that the only reason you’re getting off the hook is that Jesus allegedly managed to have a generalized Bayes score of zero despite being unable to tell a live fig tree from a dead one at a moderate distance and getting all pissed off about it for no immediately discernible reason? Just sayin’, count your blessings.
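(Gloss for anyone keeping score at home, sketch only: the Bayes score here is just the log score,

$\text{score} = \sum_i \log P(x_i)$

summed over the outcomes $x_i$ that actually happen. It’s always at most zero, and it equals zero only if you assigned probability 1 to everything that actually occurred, so “a generalized Bayes score of zero” is a cheeky way of saying “never even slightly wrong about anything.”)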
Yes, of course. Though why he’d do that, instead of all the other things he could be doing, like creating a lucky hat or sending a prophet to explain the difference between “please don’t be an idiot and quibble over whether it might hurt my feelings if you tell me the truth” and “please be as insulting as possible in your dealings with me”, is another question.
No, largely because I have no idea what that would even mean. However, if you mean that using good epistemic hygiene is a sin because there’s objective morality, or if you think the objective morality only applies in certain situations which require special epistemology to handle, you’re wrong.
It’s just that now “lucky socks” is the local Schelling point. It’s possible I don’t understand God very well, but I personally am modally afraid of jinxing stuff or setting myself up for dramatic irony. It has to do with how my personal history’s played out. I was mostly just using the socks thing as an example of this larger problem of how epistemology gets harder when there’s a very powerful entity around. I know I have a really hard time predicting the future because I’m used to… “miracles” occurring and helping me out, but I don’t want to take them for granted, but I want to make accurate predictions… And so on. Maybe I’m over-complicating things.
Okay, I can understand that. It can be annoying. However, the standard framework does still apply; you can still use Bayes. It’s like anything else confusing you.
I see what you’re saying and it’s a sensible approximation but I’m not actually sure you can use Bayes in situations with “mutual simulation” like that. Are you familiar with updateless/ambient decision theory perchance?
No, I’m not. Should I be? Do you have a link to offer?
This post combined with all the comments is perhaps the best place to start, or this post might be an easier introduction to the sorts of problems that Bayes has trouble with. This is the LW wiki hub for decision theory. That said it would take me awhile to explain why I think it’d particularly interest you and how it’s related to things like lucky socks, especially as a lot of the most interesting ideas are still highly speculative. I’d like to write such an explanation at some point but can’t at the moment.
Welcome to Less Wrong! Heh heh.
...and they can say exactly the same thing about you. It’s exactly that symmetry that defines No True Scotsman. You think you are reading and applying the text correctly, they think they are. It doesn’t help to insist that you’re really right and they’re really wrong, because they can do the same thing.
No, No True Scotsman is characterized by moveable goalposts. If you actually do have a definition of True Scotsman that you can point to and won’t change, then you’re not going to fall under this fallacy.
Okay, I’m confused here. Do you believe there are potentially correct and incorrect answers to the question “what does the Bible say that Jesus taught while alive?”
IMO, most Christians unconsciously concentrate on the passages that match their preconceptions, and ignore or explain away the rest. This behavior is ridiculously easy to notice in others, and equally difficult to notice in oneself.
For example, I expect you to ignore or explain away Matthew 10:34: “Do not think that I have come to bring peace to the earth. I have not come to bring peace, but a sword.”
I expect you find Mark 11:12-14 rather bewildering: “On the following day, when they came from Bethany, he was hungry. And seeing in the distance a fig tree in leaf, he went to see if he could find anything on it. When he came to it, he found nothing but leaves, for it was not the season for figs. And he said to it, “May no one ever eat fruit from you again.””
I still think Luke 14:26 has a moderately good explanation behind it, but there’s also a good chance that this is a verse I’m still explaining away, even though I’m not a Christian any more and don’t need to: “If anyone comes to me and does not hate his own father and mother and wife and children and brothers and sisters, yes, and even his own life, he cannot be my disciple.”
The bible was authored by different individuals over the course of time. That’s pretty well established. Those individuals had different motives and goals. IMO, this causes there to actually be competing strains of thought in the bible. People pick out the strains of thought that speak to their preconceived notions. For one last example, I expect you’ll explain James in light of Ephesians, arguing that grace is the main theme. But I think it’s equally valid for someone to explain Ephesians in light of James, arguing that changed behavior is the main theme. These are both valid approaches, in my mind, because contrary to the expectations of Christians (who believe that deep down, James and Ephesians must be saying the same thing), James and Ephesians are actually opposing view points.
Finally, I’ll answer your question: probably not. Not every collection of words has an objective meaning. Restricting yourself to the gospels helps a lot, but I still think they are ambiguous enough to support multiple interpretations.
That isn’t a tacked on addition. It’s the core principle of the entire faith!
Well, lavalamp_2008 vigorously agrees with you, anyway...
The way I see it, there appear to be enough contradictions and ambiguities in the Bible and associated fan work that it’s possible to use it to justify almost anything. (Including slavery.) So it’s hard to tell a priori what’s un-Christian and what isn’t.
Against a Biblical literalist, this would probably be a pretty good attack—if you think a plausible implication of a single verse in the Bible, taken out of context, is an absolute moral justification for a proposed action, then, yes, you can justify pretty much any behavior.
However, this does not seem to be the thrust of AspiringKnitter’s point, nor, even if it were, should we be content to argue against such a rhetorically weak position.
Rather, I think AspiringKnitter is arguing that certain emotions, attitudes, dispositions, etc. are repeated often enough and forcefully enough in the Bible so as to carve out an identifiable cluster in thing-space. A kind, gentle, equalitarian pacifist is (among other things) acting more consistently with the teachings of the literary character of Jesus than a judgmental, aggressive, elitist warrior. Assessing whether someone is acting consistently with the literary character of Jesus’s teachings is an inherently subjective enterprise, but that doesn’t mean that all opinions on the subject are equally valid—there is some content there.
You have a good point there.
Then again, there are plenty of times that Jesus says things to the effect of “Repent sinners, because the end is coming, and God and I are gonna kick your ass if you don’t!”
-- Sam Harris
Sacrifice other people’s wives to the devil. That’s almost certainly out.
Yes, that’s a significant moral absurdity to us but not a big deal to the cultures who created the religion or to the texts themselves. (Fairly ambivalent—mostly just supports following whatever is the status quo on the subject.)
No, it’s really not. There is plenty of grey but there are a whole lot of clear cut rules too. Murdering. Stealing. Grabbing guys by the testicles when they are fighting. All sorts of things.
Your comment seems to be about a general trend and doesn’t rest on slavery itself, correct?
Because if not, I just want to point out that the Bible never says “slavery is good”. It regulates it, ensuring minimal rights for slaves, and assumes it will happen, which is kind of like the rationale behind legalizing drugs. Slaves are commanded in the New Testament to obey their masters, which those telling them to do so explain as being so that the faith doesn’t get a bad reputation. The only time anyone’s told to practice slavery is as punishment for a crime, which is surely no worse than incarceration. At least you’re getting some extra work done.
I assume this doesn’t change your mind because you have other examples in mind?
One thing that struck me about the Bible when I first read it was that Jesus never flat-out said, “look guys, owning people is wrong, don’t do it”. Instead, he (as you pointed out) treats slavery as a basic fact of life, sort of like breathing or language or agriculture. There are a lot of parables in the New Testament which use slavery as a plot device, or as an analogy to illustrate a point, but none that imagine a world without it.
Contrast this to the modern world we live in. To most of us, slavery is almost unthinkable, and we condemn it whenever we see it. As imperfect as we are, we’ve come a long way in the past 2000 years—all of us, even Christians. That’s something to be proud of, IMO.
Hrm, I support legalizing-and-regulating (at least some) drugs and am not in favor of legalizing-and-regulating slavery. I just thought about it for 5 minutes and I still really don’t think they are analogous.
Deciding factor: sane, controlled drug use does not harm anyone (with the possible exception of the user, but they do so willingly). “sane, controlled” slavery would still harm someone against their will (with the exception of voluntary BDSM type relationships, but I’m pretty sure that’s not what we’re talking about).
Do you support legalizing and regulating the imprisonment of people against their will?
Haha, I did think of that before making my last comment :)
Answer: in cases where said people are likely to harm others, yes. IMO, society gains more utilons from incarcerating them than the individuals lose from being incarcerated. Otherwise, I’d much rather see more constructive forms of punishment.
OK. So, consider a proposal to force prisoners to perform involuntary labor, in such a way that society gains more utilons from that labor than the individuals lose from being forced to perform it.
Would you support that proposal?
Would you label that proposal “slavery”?
If not (to either or both), why not?
It would probably depend on the specific proposal. I’d lean more towards “no” the more involuntary and demeaning the task. (I’m not certain my values are consistent here; I haven’t put huge amounts of thought into it.)
Not in the sense I thought we were talking about, which (at least in my mind) included the concept of one individual “owning” another. In a more general sense, I guess yes.
Well, for my own part I would consider a system of involuntary forced labor as good an example of slavery as I can think of… to be told “yes, you have to work at what I tell you to work at, and you have no choice in the matter, but at least I don’t own you” would be bewildering.
That said, I don’t care about the semantics very much. But if the deciding factor in your opposition to legalizing and regulating slavery is that slavery harms someone against their will, then it seems strange to me that who owns whom is relevant here. Is ownership in and of itself a form of harm?
Tabooing “slavery”: “You committed crimes and society has deemed that you will perform task X for Y years as a repayment” seems significantly different (to me) from “You were kidnapped from country Z, sold to plantation owner W and must perform task X for the rest of your life”. I can see arguments for and against the former, but the latter is just plain evil.
This actually understates the degree of difference. Chattel slavery isn’t simply about involuntary labor. It also involves, for example, lacking the autonomy to marry without the consent of one’s master, the arbitrary separation of families and the selling of slaves’ children, etc.
Sure, I agree. But unless the latter is what’s being referred to Biblically, we do seem to have shifted the topic of conversation somewhere along the line.
It’s been awhile since I read it last, but IIRC, the laws regarding slavery in the OT cover individuals captured in a war as well as those sold into slavery to pay a debt.
That’s consistent with my recollection as well.
Does each and every feature of slavery need to contribute to its awfulness?
Certainly not.
In fact, often taking slaves is outright sinful. (Because you’re supposed to genocide them instead! :P)
That’s certainly the Old Testament position (e.g. the Amalekites). But I don’t think it’s fair to say that is an inherent part of Christian thought.
I don’t think “take slaves as punishment” is inherent Christian thought either.
I would confirm this with a particular emphasis on schizophrenia. Actually not quite—as I understand it there is a negative correlation.
Is this a “Catholics aren’t Christian” thing, or just drawing attention to the point that not all Christians are Catholic?
The latter.
Alright. I’ve encountered some people of the former opinion, and while it really didn’t square with the impression you’ve given thus far, I would have been interested to see your reasoning if you’d in fact held that view.
Hmm, so apparently, looking up religious conversion testimonies on the intertubes is more difficult than I thought, because all the top search results lead to sites that basically say, “here’s why religion X is wrong and my own religion Y is the best thing since sliced bread”. That said, here’s a random compilation of Christianity-to-Islam conversion testimonials. You can also check out the daily “Why am I an Atheist” feature on Pharyngula, but be advised that this site is quite a bit more angry than Less Wrong, so the posts may not be representative.
BTW, I’m not endorsing any of these testimonials, I’m just pointing out that they do exist.
Well, I brought that up because I know of at least one mental illness-related violent incident in my own extended family. That said, you are probably right in saying that schizophrenia and violence are not strongly correlated. However, note that violence against others was just one of the negative effects I’d brought up; existential risk to one’s self was another.
I think the key disagreement we’re having is along the following lines: is it better to believe in something that’s true, or in something that’s probably false but has a positive effect on you as a person? I believe that the second choice will actually result in lower utility. Am I correct in thinking that you disagree? If so, I can elaborate on my position.
I don’t think there are many people (outside of upper management, maybe, heh), of any religious denomination or lack thereof, who wake up every morning and say to themselves, “man, I really want to fulfill some selfish desires today, and other people can go suck it”. Though, in a trivial sense, I suppose that one can interpret wanting to be nice to people as a selfish desire, as well...
You keep asserting things like this, but to an atheist, or an adherent of any faith other than yours, these assertions are pretty close to null statements—unless you can back them up with some evidence that is independent of faith.
Every single person (plus or minus epsilon) who calls oneself “Christian” claims to “follow Jesus’s teachings”; but all Christians disagree on what “following Jesus’s teachings” actually means, so your test is not objective. All those Christians who want to persecute gay people, ban abortion, teach Creationism in schools, or even merely follow the Pope and venerate Mary—all of them believe that they are doing what Jesus would’ve wanted them to do, and they can quote Bible verses to prove it.
Some Christians claim that this story is a later addition to the Bible and therefore non-authoritative. I should also mention that both YHVH and, to a lesser extent, Jesus, did some pretty intolerant things; such as committing wholesale genocide, whipping people, condemning people, authorizing slavery, etc. The Bible is quite a large book...
Thank you.
I’m sorry.
No, I don’t think that’s true, because it’s better to believe what’s true.
So do I, because of the utility I assign to being right.
No.
Suppose, hypothetically, that current LessWrong trends of adding rituals and treating EY as to some extent above others continue. And then suppose that decades or centuries down the line, we haven’t got transhumanism, but we HAVE got LessWrongians who now argue about what EY really meant. And some of them disagree with each other, and others outside their community just raise their eyebrows and think man, LessWrongians are such a weird cult. Would it be correct, at least, to say that there’s a correct answer to the question “who is following Eliezer Yudkowsky’s teachings?” That there’s a yes or no answer to the question “did EY advocate prisons just because he failed to speak out against them?” Or to the question “would he have disapproved of people being irrational?” If not, I’ll admit you’re being self-consistent, at least.
And that claim should be settled by studying the relevant history.
EDIT: oh, and I forgot to mention that one doesn’t have to actually think “I want to go around fulfilling my selfish desires” so much as just have a utility function that values only one’s own comfort and not other people’s.
This statement appears to contradict your earlier statements that
a). It’s better to live with the perception-altering symptoms of schizophrenia, than to replace those symptoms with depression and other side-effects, and
b). You determine the nature of every “gut feeling” (i.e., whether it is divine or internal) by using multiple criteria, one of which is, “would I be better off as a person if this feeling was, in fact, divine”.
I hope not, I think people are engaging in more than enough EY-worship as it is, but that’s beside the point...
Since we know today that EY actually existed, and what he talked about, then yes. However, this won’t be terribly relevant in the distant future, for several reasons:
Even though everyone would have an answer to this question, it is far from guaranteed that more than zero answers would be correct, because it’s entirely possible that no Yudkowskian sect would have the right answer.
Our descendants likely won’t have access to EY’s original texts, but to Swahili translations from garbled Chinese transcriptions, or something; it’s possible that the translations would reflect the translators’ preferences more than EY’s original intent. In this case, EY’s original teachings would be rendered effectively inaccessible, and thus the question would become unanswerable.
Unlike us here in the past, our future descendants won’t have any direct evidence of EY’s existence. They may have so little evidence, in fact, that they may be entirely justified in concluding that EY was a fictional character, like James Bond or Harry Potter. I’m not sure if fictional characters can have “teachings” or not.
This question is not analogous, because, unlike the characters in the OT and NT, EY does not make a habit of frequently using prisons as the basis for his parables, nor does EY claim to be any kind of a moral authority. That said, if EY did say these things, and if prisons were found to be extremely immoral in the future—then our descendants would be entirely justified in saying that EY’s morality was far inferior to their own.
I doubt whether there exist any reasonably fresh first-hand accounts of Jesus’s daily life (assuming, of course, that Jesus existed at all). If such accounts did exist, they did not survive the millennia that passed since then. Thus, it would be very difficult to determine what Jesus did and did not do—especially given the fact that we don’t have enough secular evidence to even conclude that he existed with any kind of certainty.
I want to say I don’t know why you think I made that statement, but I do know: it’s because you don’t understand what I said. What I said was this: those drugs fix the psychosis less than half the time; almost ten percent of cases spontaneously recover anyway; and the rest of the utility function might take overwhelming amounts of disutility from the side-effects, which include permanent disfiguring tics, a type of unfixable restlessness that isn’t helped by fidgeting and usually causes great suffering, greater risk of disease, lack of caring about anything, mental fog (which will definitely impair your ability to find the truth), and psychosis (not even kidding, that’s one of the side-effects of antipsychotics). Add that being diagnosed can lead to a curtailing of one’s civil liberties, and it might not be worth it. Look, there’s this moral theory called utilitarianism where you can have one bad thing happen and still think it’s worth it because the alternative is worse, and it doesn’t just have to work for morals. It works for anything; you can’t just say “X is bad, fix X at all cost”. You have to be sure it’s not actually the best state of affairs first. Something can be both appalling and the best possible choice, and my utility function isn’t as simple as you seem to think it is. I think there are things of value besides just having perfectly clear perception.
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could. /nitpick
I really want to throw up my hands here and say “but I’ve explained this MULTIPLE TIMES, you are BEING AN IDIOT” but I remember the illusion of transparency. And that you haven’t understood. And that you didn’t make a deliberate decision to annoy me. But I’m still annoyed. I STILL want to call you an idiot, even though I know I haven’t phrased something correctly and I should explain again. That doesn’t even sound like what I believe or what I (thought I) said. (Maybe that’s how it came out. Ugh.)
Why is communication so difficult? Why doesn’t knowing that someone’s not doing it on purpose matter? It’s the sort of thing that you’d think would actually affect my feelings.
You would be surprised… If it weren’t for the Internet Archive, much information would have already been lost. Some modern websites are starting to use web design techniques (AJAX-loaded content) that break such archive services.
One option would be to reply with a pointer to your previous comment. I see you’ve used the link syntax within a comment—this web site supports permalinks to comments as well. At least you wouldn’t be forced to repeat yourself.
But since I obviously explained it wrong, what good does it do to remind him of where I explained it? I’ve used the wrong words, I need to find new ones. Ugh.
Best wishes. Was your previous explanation earlier in your interchange with Bugmaster? If so, I agree that Bugmaster would have read your explanation, and that pointing to it wouldn’t help (I sympathize). If, however, your previous explanation was in response to another lesswrongian, it is possible that Bugmaster missed it, in which case a pointer might help. I’ve been following your comments, but I’m sure I’ve missed some of them.
Or, perhaps, a link and a clarification.
It’s conceivable that English could drift enough that EY’s meaning would be unclear even if the texts remain.
(I just came back from vacation, sorry for the late reply, and happy New Year! Also, Merry Christmas if you are so inclined :-) )
Firstly, I operate by Crocker’s Rules, so you can call me anything you want and I won’t mind.
I agree with you completely regarding utilitarianism (although in this case we’re not talking about the moral theory, just the approach in general). All I was saying is that the utility one places on believing things that are likely to be actually true should, IMO, be extremely high—and possibly higher than the utility you assign to this feature. But “extremely high” does not mean “infinite”, of course, and it’s entirely possible that, in some cases, the disutility from all the side-effects will not be worth the utility gain—especially if the side-effects are preventing you from believing true things anyway (f.ex. “mental fog”, psychosis, depression, etc.).
That said, if I personally was seeing visions or hearing voices, I would be willing (assuming I remained reasonably rational, of course) to risk a very large disutility even for a less than 50% chance of fixing the problem. If I can’t trust my senses (or, indeed, my thoughts), then my ability to correctly evaluate my utility is greatly diminished. I could be thinking that everything is just great, while in reality I was hurting myself or others, and I’d be none the wiser. Of course, I could also be just great in reality, as well; but given the way this universe works, this is unlikely.
Data on the Internet is less permanent than many people think, IMO, but this is probably beside the point; I was making an analogy to the Bible, which was written in the days before the Internet, but (sadly) after the days of giant stone steles. Besides, the way things are going, it’s not out of the question that future versions of the Internet would all be written in Chinese...
I think this is because you possess religious faith, which I have never experienced, and thus I am unable to evaluate what you say in the same frame of reference. Or it could be because I’m just obtuse. Or a bit of both.
I don’t think so. The popularity of the English language has gained momentum such that even if its original causes (the economic status of the US) ceased, it would go on for quite a while. Chinese hasn’t gained that kind of momentum. See http://www.andaman.org/BOOK/reprints/weber/rep-weber.htm (It was written a decade and a half ago, but I don’t think the situation is significantly qualitatively different for English and Chinese in ways which couldn’t have been predicted back then.) I think English is going to remain the main international language for at least 30 more years, unless some major catastrophe occurs (where by major I mean ‘killing at least 5% of the world human population’).
There is a bit of ambiguity here, but I asked after it and apparently the more strident interpretation was not intended. The position that the Pope doesn’t determine who is Christian because the Pope is Catholic and therefore doesn’t speak with authority regarding those Christians who are not Catholic seems uncontroversial, internally consistent, and not privileging any particular view.
Ok, that makes more sense, thanks. My apologies (again :-( ) to AK for misreading her point.
I think that by “maximizes average utility” AspiringKnitter meant utility averaged over every human being—so helpfulness and kindness to others is by necessity included.
Since a utility function is only defined up to affine transformations with positive scale factor, what does it mean to sum several utility functions together? (Surely someone has already thought about that, but I can’t think of anything sensible.)
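One way to see the problem concretely (a purely illustrative sketch; the numbers and the normalization patch mentioned at the end are my own, not anything proposed in this thread): take two people, A and B, and two outcomes, x and y, with

U_A(x) = 1, U_A(y) = 0; U_B(x) = 0, U_B(y) = 0.6.

Averaging gives x a score of (1 + 0)/2 = 0.5 and y a score of (0 + 0.6)/2 = 0.3, so x wins. But V_B = 10 · U_B represents exactly the same preferences for B (it is a positive affine transformation), and with it y scores (0 + 6)/2 = 3 against x’s (1 + 0)/2 = 0.5, so y wins. The ranking delivered by “average everyone’s utility” depends on an arbitrary choice of scale for each person, so it is underspecified until some common scale is fixed, e.g. by normalizing each person’s utility to a fixed range before averaging, which is one proposed patch with well-known problems of its own.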
Yeah, that’s a problem with many formulations of utilitarianism.
Surely someone must have proposed some solution(s)?
Weight it by net-worth?
OIC, that would make more sense than what I was thinking; my apologies to AspiringKnitter if I got this wrong.
Misery is a subjective experience. The schizophrenic patients I work with describe feeling a lot of distress because of their symptoms, and their voices usually tell them frightening things. So I would expect a person hearing voices due to psychosis to be more distressed than someone hearing God.
That said, I was less happy when I believed in God because I felt constantly that I had unmet obligations to him.
If the goal is to arrive at the truth no matter one’s background or extenuating circumstances, I don’t think this list quite does the trick. You want a list of steps such that, if a Muslim generated a list using the same cognitive algorithm, it would lead them to the same conclusion your list will lead you to.
From this perspective, #2 is extremely problematic; it assumes the thing you’re trying to establish from the spiritual experience (the veracity of Christianity). If a Muslim wrote this step, it’d look totally different, as it would for any religion. (You do hint at this, props for that.) This step will only get you to the truth if you start out already having the truth.
#7 is problematic from a different perspective; well-being and knowledge of the truth are not connected on a fundamental level, most noticeably when the people around you don’t know the same things you know. For reference, see Galileo.
Also, my own thought: if we both agree that your brain can generate surprisingly coherent stuff while dreaming, then it seems reasonable to suppose the brain has machinery capable of the process. So my own null hypothesis is that that machinery can get triggered in ways which produce the content of spiritual experiences.
In addition to your discussion with APMason:
When you have a gut feeling, how do you know whether this is (most likely) a regular gut feeling, or whether this is (most likely) God speaking to you? Gut feelings are different from visions (and possibly dreams), since even perfectly sane and healthy people have them all the time.
I can’t find the source right now, but AFAIK this isn’t merely a joke, but a parable from somewhere in the Talmud. One of the rabbis wants to build an oven in a way that’s proscribed by the Law (because it’d be more convenient for some engineering reason that I forget), and the other rabbis are citing the Law at him to explain why this is wrong. The point of the parable is that the Law is paramount; not even God has the power to break it (to say nothing of mere mortal rabbis). The theme of rules and laws being ironclad is a trope of Judaism that does not, AFAIK, exist in Christianity.
In the Talmudic story, the voice of God makes a claim about the proper interpretation of the Law, but it is dismissed because the interpretation of the Law lies in the domain of Men, where it is bound by certain peculiar hermeneutics. The point is that Halacha does not flow from a single divine authority, but is produced by a legal tradition.
What? The religious lawyers made up a story to overtly usurp God!
And that’s not what I’m thinking of. It’s probably a joke about the parable, though. But I distinctly recall it NOT having a moral and being on the internet on a site of Jewish jokes.
Bugmaster: Well, go with your gut either way, since it’s probably right.
It could be something really surprising to you that you don’t think makes sense or is true, just as one example. Of course, if not, I can’t think of a good way off the top of my head.
Hmm, are you saying that going with your gut is most often the right choice? Perhaps your gut is smarter than mine, since I can recall many examples from my own life when trusting my intuitions turned out to be a bad idea. Research likewise shows that human intuition often produces wrong answers to important questions; what we call “critical thinking” today is largely a collection of techniques that help people overcome their intuitive biases. Nowadays, whenever I get a gut feeling about something, I try to make the effort to double-check it in a more systematic fashion, just to make sure (excluding exceptional situations such as “I feel like there might be a tiger in that bush”, of course).
I’m claiming that going with your gut instinct usually produces good results, and when time is limited produces the best results available unless there’s a very simple bias involved and an equally simple correction to fix it.
Sometimes I feel my gut is smarter than my explicit reasoning: sometimes, when I have to make a decision in a very limited time, I make a choice which, five seconds later, I can’t fully make sense of, but on further reflection I realize it was indeed the most reasonable possible choice after all. (There might be some kind of bias I fail to fully correct for, though.)
If you’ll allow me to butt into this conversation, I have to say that on the assumption that consciousness and identity depend not on algorithms executed by the brain (and which could be executed just as well by transistors), but on a certain special identity attached to your body which cannot be transferred to another—granting that premise—it seems perfectly rational to not want to change hardware. But when you say:
do you mean that you would like the justice system to decide personhood by asking God?
FWIW, I didn’t read it that way. I think it’s just “Also, I’ll follow the laws of secular society, obviously.”
Yeah, mostly that. Am I unclear right now? Maybe I should go take a nap...
Okay. It might not be that you were unclear—it could just have been me.
Our justice system should put in safeguards against what happens if we accidentally appoint ungodly people. That’s the intuition behind deontological morality (some people will cheat or not understand, so we have bureaucracy instead) and it’s the idea behind most laws. The reasoning here is that judges are human. This would of course be different in a theocracy ruled by Jesus, which some Christians (I’m literally so tired right now I can’t remember if this is true or just something some believe, or where it comes from) believe will happen for a thousand years between the tribulation and the end of the world.
What do you have in mind when you say “godly people”?
The qualifications I want for judges are that they be honest, intelligent, benevolent, commonsensical, and conscientious. (Knowing the law is implied by the other qualities, since an intelligent, benevolent, conscientious person wouldn’t take a job as a judge without knowing the law.)
Godly isn’t on the list because I wouldn’t trust judges who were chosen for godliness to be fair to non-godly people.
To be fair, many people who consider “godliness” to be a virtue include “benevolent and conscientious” in the definition.
Then you’re using a different definition of “godly” from the one I use.
Part but not all of my definition of “godly”. (Actually, intelligent and commonsensical aren’t part of it. So maybe judges should be godly, intelligent and commonsensical.)
How would you identify godliness for the purpose of choosing judges?
Currently, we still have some safeguards in place that ensure that we don’t accidentally appoint godly people. Our First Amendment, for example, is one such safeguard, and I believe it to be a very good thing.
The problem with using religion as a basis for public policy is that there’s no way to know (or even estimate), objectively, which religion is right. For example, would you be comfortable if our country officially adopted Sharia law, put Muslim clerics in all the key government positions, and mandated that Islam be taught in schools (*)? Most Christians would answer “no”, but why not? Is it because Christianity is the one true religion, whereas Islam is not? But Muslims say the exact same thing, only in reverse; and so does every other major religion, and there’s no way to know whether any of them are right (other than after death, I suppose, which isn’t very useful). Meanwhile, there are atheists such as myself who believe that the very idea of religion is deeply flawed; where do we fit into this proposed theocracy?
This is why I believe that decoupling religion from government was an excellent move. If the government is entirely secular, then every person is free to worship the god or gods they believe in, and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
(*) I realize that the chances of this actually happening are pretty much nonexistent, but it’s still a useful hypothetical example.
I don’t think one can say a government is entirely secular, nor can complete secularity reasonably be an ideal endlessly striven for. A political apparatus would have to determine what is and isn’t permissible, and any line drawn would be arbitrary.
Suppose a law is passed by a coalition of theist and environmentalist politicians banning eating whales, where the theists think it is wrong for people (in that country) to eat whales as a matter of religious law. A court deciding whether or not the law was impermissibly religiously motivated not only has to try to divine the motives of those involved in passing the law; it would also have to decide what probability the law would have had of passing, what to counterfactually replace the theists’ values with, etc., and then compare that to some standard.
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Which part of this is intended to prevent the appointment of godly judges? The guarantee that we won’t go killing people for heresy? Or the guarantee that you have freedom of speech and the freedom to tell the government you’d like it to do a better job on something?
Unless by “godly” you mean “fanatical extremists who approve of terrorism and/or fail to understand why theocracies only work in theory and not in practice”. In which case I agree, but that wasn’t my definition of that word.
No. You predict correctly.
Yes. And because I expect Sharia law to directly impinge on the freedoms that I rightly enjoy in secular society and would also enjoy if godly and sensible people (here meaning moral Christians who have a basic grasp of history, human nature, politics and rationality) were running things. And because I disapprove of female circumcision and the death penalty for gays. And because I think all the clothing I’d have to wear would be uncomfortable; I don’t like gloves; black is nice, but summer in California calls for something other than head-to-toe covering in all black; I prefer to dress practically; and I have a male friend I’d like to not be separated from.
Some of the general nature of these issues showed up in medieval Europe. That’s because they’re humans-with-authority issues, not just issues with Islam. (At least, not with Islam alone.)
Yes, but they’re wrong.
We can test what they claim is true. For instance, Jehovah’s Witnesses think it’ll be only a very short time until the end of the world, too short for political involvement to be useful (I think). So if we wait and the world doesn’t end and we ascertain that had more or fewer people been involved in whatever ways we could have had outcomes that would have been better or worse, we can disprove a tenet of that sect.
The one with the Muslims? Probably as corpses. Are you under the impression that I’ve suggested a Christian theocracy instead?
Concur. I don’t want our country hobbled by Baptists and Catholics arguing with each other.
Of course, the government could mandate atheism, or allow people to identify as whatever while prohibiting them from doing everything their religion calls for (distributing Gideon Bibles at schools, wearing a hijab in public, whatever). Social pressure is also a factor, one which made for an oppressive, theocraticish early America even though we had the First Amendment.
When it works, it really works. You’ll find no disagreement from anyone with a modicum of sense.
Understood. When most Christians say things like, “I wish our elected officials were more godly”, they usually mean, “I really wish we lived in a Christian theocracy”, but I see now that you’re not one of these people. In this case, would you vote for an atheist and thus against a Christian, if you thought that the atheist candidate’s policies were more beneficial to society than his Christian rival’s?
Funny, that’s what they say about you...
This is an excellent idea, but it’s not always practical; otherwise, most people would be following the same religion by now. For example, you mentioned that you don’t want to wear uncomfortable clothing or be separated from your male friend (to use some of the milder examples). Some Muslims, however (as well as some Christians), believe that refusing to do these things is not merely a bad idea, but a mortal sin, a direct affront to their god (who, according to them, is the one true god), which condemns the sinner to a fiery hell after death. How would you test whether this claim was true or not?
Even though I’m an atheist, I believe this would be a terrible idea.
Well, this all depends on what you believe in. For example, some theists believe (or at least claim to believe) that certain actions—such as wearing the wrong kind of clothes, or marrying the wrong kinds of people, etc. -- are mortal sins that provoke God’s wrath. And when God’s wrath is made manifest, it affects the entire nation, not just the individual sinners (there are plenty of Bible verses that seem to be saying the same thing).
If this belief is true, then stopping people from wearing sinful clothing or marrying in a sinful way or whatever is not merely a sensible thing to do, but pretty much a moral imperative. This is why (as far as I understand) some Christians are trying to turn our government into a Christian theocracy: they genuinely believe that it is their moral duty to do so. Since their beliefs are ultimately based on faith, they are not open to persuasion; and this is why I personally love the idea of a secular government.
Possibly. Depends on how much better, how I expected both candidates’ policies to change and how electable I considered them both.
I wouldn’t. But I would test accompanying claims. For this particular example, I can’t rule out the possibility of ending up getting sent to hell for this until I die. However, having heard what supporters of those policies say, I know that most Muslims who support this sort of idea of modest clothing claim that it causes women to be more respected, causes men exposed only to this kind of woman to be less lustful and some even claim it lowers the prevalence of rape. As I receive an optimal level of respect at the moment, I find the first claim implausible. Men in countries where it happens are more sexually frustrated and more likely to end up blowing themselves up. Countries imposing these sorts of standards harm women even more than they harm men. So that’s implausible. And rape occurs less in cultures with more unsexualized nudity, which would indicate only a modest protective effect or none at all, or could even indicate that more covering up causes more rape.
It’s not 100% out of the question that the universe has an evil god who orders people to do stupid things for his own amusement.
I say you’re wrong about atheism, but you don’t consider that strong evidence in favor of Christianity.
Ah. I see. Sounds plausible… ish… sort of.
That’s perfectly reasonable, but see my comments below.
Ok, so you’ve listed a bunch of empirically verifiable criteria, and evaluated them. This approach makes sense to me… but… it sounds to me like you’re making your political (“atheist politician vs. Christian politician”) and moral (“should I wear a burqa”) choices based primarily (or perhaps even entirely) on secular reasoning. You would support the politician who will implement the best policies (and who stands a chance of being elected at all), regardless of his religion; and you would oppose social policies that demonstrably make people unhappy—in this life, not the next. So, where does “godliness” come in?
I agree, but then, I don’t have faith to inform me of any competing gods’ existence. I imagine that if I had faith in a non-evil Christian god, who is also the only god, I’d peg the probability of the evil god’s existence at exactly 0%. But it’s possible that I’m misunderstanding what faith feels like “from the inside”.
Uh oh. :-)
I’m under the impression that you’ve just endorsed a legal system which safeguards against the consequences of appointing judges who don’t agree with Christianity’s model of right and wrong, but which doesn’t safeguard against the consequences of appointing judges who don’t agree with other religions’ models of right and wrong.
Am I mistaken?
If you are endorsing that, then yes, I think you’ve endorsed a violation of the Establishment Clause of the First Amendment as generally interpreted.
Regardless, I absolutely do endorse testing the claims of various religions (and non-religions), and only acting on the basis of a claim insofar as we have demonstrable evidence for that claim.
It might be because it’s late, but I’m confused about your first paragraph. Can you clarify?
These two quotes are an interesting contrast to me. I think the Enlightenment concept of tolerance is an essential principle of just government. But you believe that there is a right answer on the religion question. Why does tolerance make any sense to you?
How not? Hasn’t intolerance basically always resulted in either cruelty or separatism? The former is harmful to others, the latter dangerous to those who practice it. Are we defining tolerance differently? Tolerance makes sense to me for the same reason that if someone came up to me and said that the moon was made of green cheese because Omega said so, and then I ended up running into a whole bunch of people who said so and rarely listened to sense, I would not favor laws facilitating killing them. And if they said that it would be morally wrong for them to say otherwise, I would not favor causing them distress by forcing them to say things they think are wrong. Even though it makes no sense, I would avoid antagonizing them because I generally believe in not harming or antagonizing people.
Don’t you? If you’re an atheist, don’t you believe that’s the right answer?
It seems logically possible to me that government could favor a particular sect without necessarily engaging in immoral acts. For the favored sect, the government could pay the salary of pastors and the construction costs of churches. Education standards (even for home-schooled children) could include knowledge of particular theological positions of the sect. Membership could be a plus-factor in applying for government licenses or government employment.
As you note, human history strongly suggests government favoritism wouldn’t stop there and would proceed to immoral acts. But it is conceivable, right? (And if we could edit out in-group bias, I think that government favoritism is the rational response to the existence of an objectively true moral proposition).
And you are correct that I used imprecise language about knowing the right answer on religion.
It is conceivable. I consider it unlikely. It would probably be the beginning of a slippery slope, so I reject it on the grounds that it will lead to bad things.
Plus I wouldn’t know which sect it should be, but we can rule out Catholicism, which will really make them angry, and all unfavored sects will grumble. (Some Baptists believe all Catholics are a prophesied evil. Try compromising between THEM.) And, you know, this very idea is what prompted one of the two genocides that brought part of my family to the New World.
And the government could ask favors of the sect in return for these favors, corrupting its theology.
By hypothesis, the sect chosen is the one that is true.
You are correct, some Christians believe that.
You are probably thinking of premillennialism, which is a fairly common belief among Protestant denominations (particularly evangelical ones), but not a universal one. Catholic and Orthodox churches both reject it. As best I can tell it’s fundamentally a Christian descendant of the Jewish messianic teachings, which are pretty weakly supported textually but tend to imply a messiah as temporal ruler; since Christianity already has its messiah, this in turn implies a second coming well before the final judgment and the destruction of the world. Eschatology in general tends to be pretty varied and speculative as theology goes, though.
Please define “soul”.
Also: transcranial magnetic stimulation, pharmaceuticals and other chemicals, physical damage...
Makes sense enough.
For my own part, two things:
I entirely agree with you that various forms of mistaken and fraudulent identity, where entities falsely claim to be me or are falsely believed to be me, are problematic. Indeed, there are versions of that happening right now in the real world, and they are a problem. (That last part doesn’t have much to do with AI, of course.)
I agree that people being modified without their consent is problematic. That said, it’s not clear to me that I would necessarily be more subject to being modified without my consent as a computer than I am as whatever I am now—I mean, there’s already a near-infinite assortment of things that can modify me without my consent, and there do exist techniques for making accidental/malicious modification of computers difficult, or at least reversible. (I would really have appreciated error-correction algorithms after my stroke, for example, or at least the ability to restore my mind from backup afterwards. So the idea that the kind of thing I am right now is the ne plus ultra of unmodifiability rings false for me.)
Who wants to turn you into a computer? I’m confused. I don’t want to turn anybody into anything, I have no sovereignty there nor would I expect it.
EY and Robin Hanson approve of emulating people’s brains on computers.
Approving of something in principle doesn’t necessarily translate into believing it should be mandatory regardless of the subject’s feelings on the matter, or even into advocating it in any particular case. I’d be surprised if EY in particular ever made such an argument, given the attitude toward self-determination expressed in his Metaethics and Fun Theory sequences; I am admittedly extrapolating from only tangentially related data, though. Not sure I’ve ever read anything of his dealing with the ethics of brain simulation, aside from the specific and rather unusual case given in Nonperson Predicates and related articles.
Robin Hanson’s stance is a little different; his emverse is well-known, but as best I can tell he’s founding it on grounds of economic determinism rather than ethics. I’m hardly an expert on the subject, nor an unbiased observer (from what I’ve read I think he’s privileging the hypothesis, among other things), but everything of his that I’ve read on the subject parses much better as a Cold Equations sort of deal than as an ethical imperative.
And? Does that mean forcing you to be emulated?
Good point.
I’m sure you’re pro self-determination, right? Or are you? One of the things that pushed me away from religion in the beginning was that there was no space for self-determination (not that there is much from a natural perspective); the idea of being owned is not a nice one to me. Some of us don’t want to watch ourselves rot in a very short space of time.
Um, according to the Bible, the Abrahamic God’s supposed to have done some pretty awful things to people on purpose, or directed humans to do such things. It’s hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn’t act as expected; exhorting people to go wipe out other cultures, legislating victim blame into ethics around rape, sending actual fragging bears to mutilate and kill irreverent children?
I’m not the sort of person who assumes Christians are inherently bad people, but it’s a serious point of discomfort with me that some nontrivial portion of humanity believes that a being answering to that description and those actions a) exists and b) is any kind of moral authority.
If a human did that stuff, they’d be described as whimsical tyrants at the most charitable. Why’s God supposed to be different?
While I agree with some of your other points, I’m not sure about this:
We shouldn’t be too harsh until we are faced with either deleting a potentially self-improving AI that is not provably friendly or risking the destruction of not just our species but the destruction of all that we value in the universe.
That… is a surprisingly good answer.
I don’t understand the analogy. I see how deleting a superhuman AI with untold potential is a lot like killing many humans, but isn’t it a point of God’s omnipotence that humans can never even theoretically present a threat to Him or His creation (a threat that he doesn’t approve of, anyway)?
Within the fictional universe of the Old and New Testaments, it seems clear that God has certain preferences about the state of the world, and that for some unspecified reason God does not directly impose those preferences on the world. Instead, God created humans and gave them certain instructions which presumably reflect or are otherwise associated with God’s preferences, then let them go do what they would do, even when their doing so destroys things God values. And then every once in a while, God interferes with their doing those things, for reasons that are unclear.
None of that presupposes omnipotence in the sense that you mean it here, although admittedly many fans of the books have posited the notion that God possesses such omnipotence.
That said, I agree that the analogy is poor. Then again, all analogies will be poor. A superhumanly powerful entity doing and refraining from doing various things for undeclared and seemingly pointless and arbitrary motives is difficult to map to much of anything.
Yeah, I kind of realize that the problems of omnipotence, making rocks that one can’t lift and all that, only really became part of the religious discourse in a more mature and reflection-prone culture, the ways of which would already have felt alien to the OT’s authors.
Taking the Old Testament God as he is in the Book of Genesis, this isn’t clear at all. At least when talking about the long-term threat potential of humans.
or
The whole idea of what exactly God is varied during the long centuries in which the stories were written.
Do you have an opinion about whether an AI that wasn’t an em could have a soul?
No. I haven’t tested it. I haven’t ever seen an AI or anything like that. I don’t know what basis I’d have for theorizing.
My apologies. Interesting questions nonetheless.
Hey, why did you retract this? That would have netted upvotes!
Didn’t realise what retracting did.
Comment score below threshold, 306 replies. (Now 307). Is this a record?
It does suggest that the “newest comment” section is sufficient to sustain a discussion.