Mostly various fake explanations and ways of avoiding noticing confusion, e.g. catch-all “explanations” like “mass hallucination”, “abnormal weather phenomena”, “coincidence”, &c. Also bad are vague categories like “pseudoscience” which are almost entirely about status.
I first came into contact with the skeptic community while looking at some of the more intelligent papers in the ufology literature. You’d get people, e.g. intelligent physicists, who were obviously just trying to make sense of the situation as best they could, getting a lot of eye-witness testimony, thinking of plausible mechanisms, &c., and then the skeptics would reply with papers along the lines of “how dare you even look at something so contemptible, let alone take it seriously, the explanation is obviously mass hallucination or weird weather phenomena or something, I’m going to ignore everything you said and instead spend many paragraphs talking about how stupid ufology is because ufology is so stupid, why are you being so stupid”.
The anti-parapsychology papers weren’t as bad, but they were still pretty bad; I noticed that skeptics have this nasty habit of manipulating statistics to make a point, even more so than the parapsychologists. (Which I think caused me to update too much towards thinking the parapsychology literature is actually worthwhile; you know, “if the skeptics have to manipulate the statistics so much to prove their point, then maybe the parapsychologists really are on to something”.) A fair amount of the manipulation on both sides involves patently bogus claims about file drawer effects.
I also think the new atheists are pretty bad, but I think my audience would demand more justification for that claim than I’m prepared to give.
(ETA: To some extent I’d be willing to forgive skeptics if they applied their skepticism evenly; then they’d be more like skeptics in the Greek sense, which is justifiable even if not pragmatic. But as is they mostly accept whatever popular science and liberaltarian ideology tells them to accept, and in general it’s almost entirely about promoting or denouncing things according to un-reflected-upon Enlightenment ideology.)
Now that’s really curious. I wonder why wedrifid deleted the original.
I am also confused. He was downvoted pretty quickly, perhaps because he was encouraging my skeptic-slandering or encouraging off-topic discussion?
Yeah, but your response was (forgive me for saying uncharacteristically) coherent and reasonable.
(Needless self-disclosing side comment:) I’m probably more coherent & reasonable because I’m buzzed & caffeinated and am commenting with the aim of conversing with people rather than the aim of avoiding culpability for not even having tried to warn people that they are making predictably-retrospectively-stupid mistakes. I don’t normally have enough motivational resources to actually try to talk to people, alas.
When you were a kid, what sort of communication environment were you living in?
Is this a technical term? Google isn’t helping.
(I suspect that whatever motivational quirks I have are largely the result of genetic/neurological predisposition, at least up until around the age of 16 when I got a girlfriend, which is when my motivational system started to get really messed up. (Depressed girlfriend who was convinced I didn’t love her; I had to meet impossible standards, had to reliably guess in advance which impossible standards I would be expected to have already met, that kinda thing. Resulted in (double-negative?-)obsession with things like this. But I’m not entirely sure what sort of data you’re looking to get with your question.))
“Communication environment” is something I came up with on the fly, it’s not a technical term, though maybe it should be.
I was thinking about how accuracy, trust, and kindness were handled around and with you when you were a kid.
The thing is, being straightforward with people is apparently very hard work for you, and I’m wondering whether straightforwardness was ignored and/or punished when you were forming basic emotional reactions.
Not the same problem you’ve got: transcript of Ira Glass talking with Mike Daisey. Daisey had done a substantial amount of lying about conditions in an Apple factory in China, and Ira Glass’ radio show didn’t do quite enough checking to catch it, so a bunch of falsehoods went out nationally.
What caught my eye was how impossible it was for Daisey to prioritize facts over emotional effects, and I wonder if an emotional pattern like that happens by accident.
It’s possible that I’m underestimating neurological predisposition, though.
That is a very fascinating case study in how people try to get out of double-bind moral obligations where one has to choose between an explicit or an implicit lie, especially when negative-sum signalling games have resulted in an equilibrium where explicitly telling the truth would subjectively seem as if it was almost guaranteed to convey zero or even negative information (and thus result in the explicit-truth-teller’s moral blameworthiness). I’m disappointed that Act Three wasn’t explicitly about that, and that in Act Two Ira doesn’t help Daisey explain the nature of the conflicting moral obligations… but I suppose that angle would have gone over the heads of the listeners, and to most listeners it would have seemed as if Ira/NPR was trying to save face with philosophical mumbo-jumbo, so Ira/NPR was forced to go the guilt-tripping route. It was really interesting anyway, I guess.
I don’t have any reason to suspect anything abnormal, but I have very little random access to my memories.
In aqua veritas, in vino sanitas. (“In water, truth; in wine, health.”)
Downvoted to −1, then back up at 0 when I edited/deleted/recommented. I attributed the early vote to just something personal against either one of us. We both get those from time to time, but they tend to be averaged out given time—at least mine do. Yours have a bit more weight behind them.
I’ve noticed such user-specific downvotes tend to be a lot more common lately, not just for old folk like us but new folk too. E.g. User:ABrooks made a post about FAI that didn’t fit in with local ideas, and consequently almost all of his comments were immediately downvoted. Only −1, but that’s enough to significantly bias folks’ intuitions about how charitable they should be when reading a comment. Various people have noticed weird voting patterns recently, normally in the form of heavy downvoting of seemingly relatively innocuous comments. I’ve also noticed that “yay our side, boo their side” comments tend to be very highly upvoted, more so than a year or two ago. Nothing to do about it, but it might be worth a discussion post along the lines of “LessWrong has become somewhat more stupid lately, don’t take the downvotes too personally”. But probably not. (It’s not like LessWrong was ever that elite anyway; too much evaporative cooling, which resulted in a lot of people who strongly agree with Eliezer even when he’s wrong and even when they don’t know why he’s right. (I used to lean in that direction.) But it’s still kinda sad; there aren’t any publicly open alternatives.)
Link? I don’t see a post by him.
Edit: Found it. It’s one I downvoted, but without it having enough impact on me to even remember that ABrooks is a user. I believe I stopped reading after the first couple of paragraphs, after it introduced a premise that seemed fundamentally absurd. Something to do with it not being theoretically possible to create an AI without teaching it to think through interaction. (I mean… what? Identify the thing that is an AI after it has been taught to think, then combine bits of matter in such a way that you have that AI. Basic physical reductionism!)
I’m a little surprised that he got mass downvoted (i.e. his other comments, not that particular post). For that matter I’m a little surprised that the specific post got significantly downvoted. Usually things far more stupid than that stay positive*. Did he get into personal bickering with a specific individual at all? That’s what I usually associate with mass downvotes.
* “Usually things far more stupid than that stay positive” of course really means “of the posts far more stupid than that which immediately spring to my mind, most are not downvoted.”
Well, one could make a computational complexity argument that there is no way to “identify the thing that is an AI after it has been taught to think” other than actually interacting with it.
Sure, but once you do so, then you can build another that you didn’t interact with.
On the other hand, how much variation do you need to introduce before you can declare that the second copy is a different intelligence than the one that you copied it from? And how sure can you be that it’s still an AI after this variation? So there’s an argument to be made there, although I’m far from convinced for now.
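(To make the copy argument concrete, a minimal Python sketch with hypothetical toy classes, not anything anyone above actually proposed: one agent is taught purely through interaction, and deep-copying its learned state yields an agent with the same competence that has never interacted with anything.)

```python
import copy

class Agent:
    """A toy 'learner' that acquires its behaviour only through interaction."""

    def __init__(self):
        self.policy = {}  # stimulus -> learned response

    def interact(self, stimulus, correct_response):
        # All "teaching" happens here, through interaction...
        self.policy[stimulus] = correct_response

    def respond(self, stimulus):
        return self.policy.get(stimulus, "no idea")

# Teach one agent through interaction.
taught = Agent()
for stimulus, answer in [("2+2", "4"), ("capital of France", "Paris")]:
    taught.interact(stimulus, answer)

# ...but the result of that teaching is just a configuration of state, so
# duplicating the configuration duplicates the competence: this second
# agent has never interacted with anything, yet behaves identically.
never_interacted = copy.deepcopy(taught)
assert never_interacted.respond("2+2") == taught.respond("2+2") == "4"
```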
I have the reverse message. I say be willing to just take them personally when appropriate. I don’t really mind people having a personal problem with me, but if people sincerely negatively evaluate comments that I consider high quality, then that distresses me. After all, if I hear “Fuck you! You’re a dick.” then the subject matter is subjective and they may have a point. If I hear “You’re wrong!” then I may, after double checking, actually have to evaluate the accuser as being poor at thinking. Too much of that just leads to contempt and bitterness.
(Old-Timer Topper:) That’s nuthin’! Do you actually think kids these days have read enough rationality literature—from Eliezer or otherwise—to even know which beliefs to take on faith without knowing why? I don’t see much in the way of (correct) application of rationality principles for me to be declaring it done without basis.
The original didn’t give Will the explanation. I deleted and recommented instead of editing because I thought Will might miss the explicitly non-rhetorical part, since he would not be notified of an edit. Then there was just a race condition.
I wanted to be clear that I was communicating in good faith, because I am epistemically opposed to Will on some issues, and if my comment was evaluated as part of the reference class “typical human use of language” the meaning of my phrasing would not be one of sincere inquiry.
An even-handed discussion of the kind of evidence frequently offered by those who believe in anomalies and those who don’t, and what sort of evidence and argument should be offered.
How I found it: long ago, I read about a magazine called the Zetetic [something] in Robert Anton Wilson. Unlike believer publications and skeptic publications, it was an effort to really look at the details of arguments. Unlike believer publications (which are numerous) and skeptic publications (of which I know only one), it never found much of an audience, and didn’t last.
However, googling turned up Marcello Truzzi, who wanted there to be a zetetic influence at the Skeptical Inquirer, and that led me to the link I referenced.
A careful skeptical investigation of a UFO claim. I’m citing it because it’s much better than just saying “hallucination”.
Amateur scientistic groupies. The tendency to react with one-comment dismissals is understandable—an issue may be sufficiently explored, yet new people keep coming with the same stupid idea again and again and again. And it is also rather annoying. Look up the meme video “Shit Skeptics Say”.
I used to do it, working against it now.