Hm. This may be fuzzy memory on my part, but I thought I remembered downvoting this post, seeing it at −5, and now it’s at 0 and I haven’t downvoted it. I really hope that’s fuzzy memory on my part.
That was an alternate universe. As this post was heavily downvoted, hardly any LWers took the challenge, depriving SIAI of the money they’d have gotten by referring successful challengers. Also, because of information cascades, the same thing happened to all future Quixey posts, leading Quixey to eventually stop posting here. Because of the negative word-of-mouth from such incidents, people stopped looking at the LW audience as a set of eyeballs to monetise.
Consequently, the SIAI was deprived of all potential advertising income, and lacked the budget to perfect FAI theory in time. Meanwhile, the Chinese government, after decades of effort, managed to develop a uFAI. Vishwabandhu Gupta of India managed to convince his countrymen that an AI is some sort of intelligence-enhancing ayurvedic wonder-drug that the Chinese had illegally patented. Consequently, the Indians eagerly invaded China, believing that increased intelligence would allow their kids to get into good colleges. This localised conflict blew up into the Artilect War, which killed everyone on the planet.
So please… don’t do that again. Just don’t. I’m tired of having to travel to an alternate universe every time that happens.
By not wanting advertising on LW, I have doomed humanity? Your sense of perspective is troubling. (You should also be ashamed of the narrative fallacy that follows.)
If the LW community’s votes are being overridden somehow, I would at least like the LW editors to be honest about it.
Because, clearly, it is impossible for something as huge as millions of lives to depend on an art academy’s decision.
Imagine that, with every rejection letter the dean of admissions sends out, he has a brief moment of worry: “Is this letter going to put someone on the path to becoming a mass murderer?” His sense of perspective would also be troubling, as his ability to predict the difference acceptance or rejection will make in an applicant’s life is too limited for worrying about those sorts of events to be fruitful. It’s not a statement of impossibility; it’s a statement of improbability. Giving undue weight to the example of Hitler is availability bias.
O RLY?
Yes, really. I presume you’ve read about fictional evidence and the conjunction fallacy? If you want to argue that LW’s eyeballs should be monetized, argue that directly! We’ll have an interesting discussion out in the open. But assuming that LW’s eyeballs should be monetized because you can construct a story in which a few dollars makes the difference between the SIAI succeeding and failing is not rational discourse. Put probabilities on things, talk about values, and we’ll do some calculations.
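To make that concrete, here is a minimal sketch of the sort of back-of-the-envelope calculation I mean; every number and variable name in it is a placeholder I am assuming for illustration, not a real estimate:

```python
# Toy expected-value comparison: all probabilities and values are made-up
# placeholders, purely to illustrate "put probabilities on things and calculate".

p_income_is_decisive = 0.01   # assumed chance that referral/ad income ever matters to SIAI
p_downvote_blocks_it = 0.05   # assumed chance that downvoting this post forecloses that income
value_at_stake       = 1e6    # assumed value (in arbitrary units) of the outcome affected

expected_cost_of_downvoting = p_income_is_decisive * p_downvote_blocks_it * value_at_stake

cost_of_hosting_ads = 300     # assumed annoyance/reputation cost of monetizing LW's eyeballs

print(expected_cost_of_downvoting)                        # 500.0
print(expected_cost_of_downvoting > cost_of_hosting_ads)  # True, under these made-up numbers
```

Swap in your own probabilities and values and the comparison may well flip; that is the discussion worth having in the open.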
I’d have thought that a story as far-fetched and ludicrous as that one would make it obvious that I was just fooling around, not making an argument. Apparently that’s not the case.
My apologies if I accidentally managed to convince someone of the necessity of monetizing LW’s eyeballs.
I completely misunderstood your post, then. My apologies as well.
If I upvote/downvote comments on an LW page, then close the page a few moments afterwards, sometimes my votes don’t register (they’re not there if I visit the same page later). If something similar happened to you, that might explain why your vote seemed to disappear.
I have also seen this bug many times.
This comment thread strikes me as a good example of an anti-pattern I’ve seen before but don’t know a name for (close to, but not exactly, privileging the hypothesis): a conversation slides, without explicit comment, from reasonably suggesting a bad-case possibility to taking that possibility for granted for no apparent reason.
(disclaimer: I work for Quixey, conflict of interest and all that, but I’m pretty sure I’d be making this exact same comment if I didn’t)
That is good to know. I suspect the probability that I closed the page shortly thereafter is only about .2, but that’s significantly higher than the prior I put on the LW editing staff removing downvotes, so my worry has decreased significantly.
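To spell out the arithmetic behind that (the .2 is my figure from above; the prior on editors tampering is a placeholder I am assuming purely for illustration):

```python
# Rough odds comparison between the two explanations for the missing downvote.
p_closed_page_too_soon = 0.2     # my estimate that I closed the tab before the vote registered
p_editors_removed_vote = 0.001   # assumed placeholder prior on staff silently removing a vote

# Both hypotheses explain the observation about equally well, so the posterior
# odds are roughly the ratio of these priors.
odds_bug_vs_tampering = p_closed_page_too_soon / p_editors_removed_vote
print(odds_bug_vs_tampering)  # 200.0 -- roughly 200:1 in favour of the mundane explanation
```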
Downvoted the post based on the intervention you described. Normally I’d have upvoted.
I do want to stress that I’m not certain I downvoted the post before I wrote this comment. It’s plausible that 5 people upvoted the post because they wanted it to be visible. That’s still an intervention I’m uneasy about, but the unease is much lower.
At least one of those five people does exist. That’s me: I found the post at −5 and left it at −4.
Seconded. Found it at −1, upvoted to 0. And it’s at −3 now…
What intervention remains if the votes were not distorted?
Basically, anyone being asked to vote the post up, rather than seeing the post and thinking “I want more of this on LW.” I apologize for not making that implication clearer. I’ve only seen this post at 0 or negative karma (but I’m not tracking it closely), which seems to me like people not wanting it to be negative rather than roughly equal groups liking and disliking it.
I upvoted the post because it had negative karma, and was not a post that I thought should be at negative karma.
In general I vote posts/comments toward where I think their karma should be. Thus, for instance, I downvoted Clippy’s comment above because I did not think it was so insightful that it merited 20+ karma. I would not have downvoted it if it were at 0 karma.
I assume many people take this approach (it fits in nicely with consequentialism), so this probably explains what you saw.
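A tiny sketch of that voting rule, purely illustrative (the function and the target value are mine, not anything LW actually implements):

```python
def vote_toward_target(current_karma: int, target_karma: int) -> int:
    """Vote in the direction of where I think the karma should be: +1, -1, or 0 (abstain)."""
    if current_karma < target_karma:
        return 1
    if current_karma > target_karma:
        return -1
    return 0

# E.g. if I think Clippy's comment merits about 5 karma:
print(vote_toward_target(current_karma=20, target_karma=5))  # -1: downvote at 20+
print(vote_toward_target(current_karma=0, target_karma=5))   # 1: at 0 karma this rule upvotes rather than downvotes
```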
As do I. While it is slightly distracting if LessWrong administrators are giving certain posts preferential treatment against community wishes, it is extremely worrisome if an attacker has convinced them to actually falsify voting records, and indicative of a particularly insidious social engineering attack.