Open Thread May 2 - May 8, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Has Eugine’s mass-downvoting got more aggressive for everyone lately, or just for me? I am getting hit for 10 points or so per day; not only old comments but (I think) every comment I post, without exception.
[EDITED to add:] Of course by “everyone” I mean all Eugine’s targets. Actually I don’t know who else he’s gunning for at the moment; perhaps it’s just me.
Perhaps some of those downvotes are from other people and/or reflect actual deficiencies in what I post. But I bet the great majority are just Eugine being Eugine.
[EDITED to add:] Actually, this is interesting. At least some of my comments that are net-positive have lots of downvotes, in some cases more than seems plausible “organically”. E.g., this one appears to be on +7-5; I’m not sure it really deserves +7, but I’m extremely sure it doesn’t deserve −5. This one appears to be +6-5; a natural −5 seems more plausible here but still unlikely. This one is on +3-4. Some more, all mysteriously on just enough downvotes to come out negative: +1-2, +3-4, +2-3, +2-3, +2-3, +3-4, +2-3, +4-5, +3-4, +2-3, +3-4, +3-4. That’s twelve consecutive comments from my overview page, all of which just happen to be on exactly −1 despite substantial numbers of votes overall.
I expect some of those downvotes are honest downvotes. But I reckon (p=0.9) Eugine has a new strategy (well done, Eugine! Such creativity!): instead of downvoting everything once, downvote everything to −1 unless you run out of sockpuppets. (And we have weak evidence that Eugine has exactly 5 downvoting sockpuppets right now.)
[EDITED again to add:] Actually, it’s more like 30 points per day right now. I wonder what fraction of all upvotes and downvotes on LW in the last week have been Eugine.
[EDITED again to add:] About 45 points in the last 8 hours; my 30-day karma is now on −39 which is something like +305-344. “Normally” I think something like 5-10% of votes on my comments are downvotes, so I guess maybe 330 or so of those downvotes are Eugine’s. (Current estimated time to zero karma, assuming no relevant moderator action: maybe 8 months.)
Thanks for posting this. I’ve forwarded it to tech support.
How do you find new accounts?
I haven’t myself noticed a lot of new accounts other than ones I’ve already reported as likely-Eugines, and one other that I’m keeping an eye on—you might want to ask OrphanWilde, who is the one who reported seeing a lot of new accounts.
When I do notice new accounts it’s simply by seeing things in the “Recent Comments” written by users whose names I don’t recognize.
(If mods don’t have a tool for listing recently created accounts, that should go on whatever monstrous wishlist we have for LW features...)
It is on the wishlist.
I watch the new comments list, primarily, and check usernames I don’t recognize for posting histories.
Over the past week a larger-than-normal number of new accounts have appeared; none of them have exhibited Eugine’s usual behaviors, so thus far I’m just observing. If they have a tendency to write until they reach 10 karma and then stop, well, they’re probably silent puppets.
To my eye, there are… an unusual number of new accounts jumping immediately into posting, lately. None of them have Eugine’s trademark style or focus on his preferred topics, however.
I would be unsurprised to find that some of them are Eugine.
There’s an obvious solution, which I propose in a spirit of impartial generosity: Insta-ban any account that downvotes any of my comments :-).
(Horrifically, that might actually be an improvement on the present state of affairs. I hope it’s unnecessary to say that it would still be an absolutely terrible idea, but I’ll say it anyway just in case.)
I am keeping an eye on the individuals, at any rate. It will be interesting if he’s adopting a new tactic of -not- talking about the same tired talking points. It would suggest a level of cleverness he thus far has not demonstrated.
And once we get the tools in place to start tracking downvote patterns, that game will be up, too.
Related xkcd
My question is: why the heck are you such a dangerous person to Eugine? What point of view do you hold that Eugine deems so worthy of mass-downvoting? Ironically for him, now I want to know.
At this point I think it’s mostly a personal vendetta on his part. But back when he wasn’t just downvoting practically everything I ever post, his mass-downvoting was usually triggered by my having the temerity to disagree with him about one of his three hot-button issues: (1) whether black people are stupid, lazy and dangerous, (2) whether women are mentally unsuited for science, engineering, etc., and (3) whether transgender people should be called “trannies”, addressed by their “old” pronouns, etc.
(Eugine would not necessarily express his positions in the way I have suggested there. But e.g. when presented with a list of highly successful black people—after he suggested there are no successful black people for a “black pride” event to celebrate—he described them as “basically dancing bears”. Make of that what you will.)
You forgot something: Eugine holds that anyone who disagrees with these views is insufficiently rational and doesn’t belong on Less Wrong.
He decided at one point that there were too many such irrational people, and engaged in a mass-downvote campaign to punish his ideological enemies; he was banned for this, and keeps coming back, like a sad dumb little puppy who can’t understand why he gets punished for shitting on the carpet.
I’m curious; did you choose that analogy on purpose?
Anyway: yes, I agree, I think Eugine thinks that lack of enthusiasm for bigotry ⇒ denial of biological realities ⇒ stupid irrationality ⇒ doesn’t belong on LW, and that’s part of what’s going on here. But I am pretty sure that Eugine or anyone else would search in vain[1] for anything I’ve said on LW that denies biological realities, and that being wrong about one controversial topic doesn’t by any means imply stupid irrationality—and I think it’s at least partly the personal-vendetta thing, and partly a severe case of political mindkilling, that stops him noticing those things.
[1] And not only because searching for anything on LW is a pain in the (ahahahaha) posterior.
Not in that regard, no. I’m actually vaguely in favor of the Sad Puppies, as I think that Larry Correia has some significant points, although I think, as with Correia, that the point was already made, and at this point I regard it as largely a political exercise.
Which is to say, I agree with the original purpose of demonstrating that there is a bias at play (given the staunch denials that such a bias existed), but have little interest in their efforts at fighting the bias. I don’t care about the award, I don’t care who it goes to; it was never a selling point to me, and never will be.
The Rabid Puppies… I find boring and childish. They hopped on the bandwagon entirely to piss off people they enjoy pissing off.
Reminds me of:
A relevant note is that “dancing bear” is a trope.
I’d actually meant to link to that exact page, but forgot.
Eugine likely sees gjm as pro-SJW
Remark: a policy of pushing all someone’s comments down to exactly −1 is worse (for LW, whether or not for the victim) than a policy of downvoting them all n times, for specific n, because it erases information. Suppose I post a stupid comment that someone votes down to −1, and an insightful one that gets up to +4; then along comes Eugine, leaves the first alone and votes the other one down to −1. And now they look exactly the same; Eugine has removed not only the evidence of my insight in the second case, but also the evidence of my stupidity in the first.
The information isn’t completely gone; from any comment whose net score isn’t zero and whose total number of votes isn’t too large you can reconstruct the upvote and downvote numbers by looking at the “% positive” figure. But that doesn’t distinguish between Eugine-downvotes and other downvotes, and e.g. the parent of this comment which is currently on (+2,56%) could be either +9-7 or +10-8. And, more to the point, “can be roughly determined by doing some calculation” is rather different from “can be seen immediately”; even in so far as the information isn’t lost, it’s severely obscured.
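To spell out that reconstruction: here is a minimal sketch of the arithmetic, assuming the displayed figure is the positive fraction rounded to the nearest whole percent (the cap on total votes is an arbitrary assumption):

```python
# Enumerate the (upvotes, downvotes) pairs consistent with a displayed
# net score and "% positive" figure. Assumes the site rounds the positive
# fraction to the nearest whole percent; the vote cap is arbitrary.
def candidate_vote_splits(net_score, percent_positive, max_total_votes=40):
    candidates = []
    for down in range(max_total_votes + 1):
        up = net_score + down
        total = up + down
        if up < 0 or total == 0 or total > max_total_votes:
            continue
        if round(100 * up / total) == percent_positive:
            candidates.append((up, down))
    return candidates

# The (+2, 56%) comment mentioned above:
print(candidate_vote_splits(2, 56))  # -> [(9, 7), (10, 8)]
```

Run it on a few comments and you can see how quickly the ambiguity grows as the vote totals do.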
The more successful Eugine is in making karma uninformative, the less grounds he has for hoping that mass-downvoting someone will ruin their reputation or drive them away. But I doubt he’s thinking clearly enough or long-term enough for this to change his behaviour.
… Whether the parent of this is on +9-7 or +10-8, that’s more than 5 downvotes. Either Eugine is now using more downvote-socks, or some other people dislike it. The latter hypothesis is plausible: I can easily imagine someone reading it and thinking “oh, for God’s sake get this meta whining off LW”. FWIW, my reasons for not shutting up about it are (1) I think it’s of some anthropological interest, (2) I still have some hope that some day it will provoke the moderators into actually stopping and/or undoing Eugine’s mass-downvoting, and (3) I have reason to think that unfortunately Eugine’s behaviour has been instrumental in driving a bunch of people away from LW and hope that keeping it visible will make it less effective and therefore less likely to push people away. And, I confess, (4) I hope to mitigate any reputational effect his attacks might have on me by reminding readers that if it weren’t for Eugine’s vendetta my 30-day karma would be somewhere around +220 rather than +12 :-). Anyway, if anyone reading this has strong opinions about whether I should shut up about this stuff, I’d be interested to know them.
He is hitting Nancy, myself, yourself, Gleb, and possibly others. ETA: Other suspected recipients of his impotent rage: ChristianKl (low-priority target?).
All of which I find… tactically stupid. If he targeted me, well, I don’t care, and nobody else would care. (And hell, I’m the one who riled him up, so it would even make something like sense.) Targeting you and Gleb is targeting bystanders; that is likely to produce administrative response.
Targeting Nancy as well, however?
I poked the tiger. It decided to try to maul the dragon in response.
I don’t think he’s targeting me because you riled him up; I’ve been on his list for ages. It’s possible that he’s got more aggressive lately because you annoyed him, but it could also be e.g. because he’s had a lot of his identities banhammered lately.
It doesn’t look to me as if he’s targeting Nancy in general; I think he’s downvoting negative things said about him, and it happens that many of them are from Nancy. (For the obvious reason.)
Eugine’s original ban was for mass-downvoting. All subsequent action against him has been more because he keeps coming back despite his ban. I have never yet seen any administrative response that actually has any impact on his mass-downvoting. I assume this is, at least in part, because dealing with that requires more difficult investigation (grovelling through LW’s horrible database structure trying to figure out who’s voted how on what, searching out sockpuppets, etc.) and the people with technical custody of LW are time-starved.
(I wonder occasionally whether the most effective way to deal with the Eugine problem is to find a security hole that makes it possible to filch the LW database files, which could then be analysed without having to go via Trike. I am fairly sure that with actual direct access to the data, it would be pretty easy to identify all of Eugine’s socks and what votes they’ve cast. At the very least they could then be terminated with extreme prejudice; undoing all their votes might require writing a bit of code and therefore another Trike interaction. For the avoidance of doubt, I am not in fact proposing to hack LW in such a fashion. Not in the near future, anyway.)
I dunno. My mental model of Nancy is that she isn’t the sort to switch from inaction to action just because she starts being personally targeted.
[EDITED to add:] By “inaction” I don’t mean “doing literally nothing”, I mean “neither stopping nor undoing the karmabuse”. The moderators are clearly being fairly active in slapping down visible Eugine accounts.
Nancy isn’t the dragon, necessarily; she’s far too nice. The dragon is in that people like her, and her getting targeted is going to upset some people.
Hell, I’m annoyed.
How will that do Eugine any harm? He seems to have mostly given up on actually posting visibly Eugine-y things, and to be concentrating on mass-downvoting his enemies with a shadowy network of sockpuppets. Ordinary LW users can’t do very much about that however annoyed they may be.
I suppose they could engage in mass-upvoting of comments from users he’s been targeting, but that wouldn’t in any useful sense solve the problem and would probably just result in Eugine gradually accumulating more sockpuppets to drown out their noise with more of his own.
Eugine still cares. You don’t carry out a multi-year campaign to try to subvert a site you don’t care about.
He’s extremely sensitive about his ideas, and how people here regard them.
There are a bunch of “comment score below threshold” comments on this thread. Those are reasonable, polite comments, mostly about the current difficulties with karma abuse here.
I hope to eventually prevent karma abuse, and finding out who’s been downvoting discussion of karma abuse should be part of the process.
Agreed.
Most of you are probably annoyed by the sudden focus on Eugine; why is Less Wrong focusing so much on one person? Isn’t that just giving him what he wants?
Well, to answer the first question, we’re not focusing on Eugine; I’m currently mostly poking him in my off-time using low-effort strategies with particular goals in mind. If I decided to wage war on Eugine, no-holds-barred, I’d start with an upvote brigade; any individual identified as being targeted by Eugine would be targeted far more effectively by my bots, with a 10:1 upvote ratio, and targeted downvotes at his sockpuppets. And I’d work to be sanctioned by the admins, meaning my brigade wouldn’t suffer attrition the way his sockpuppet army would.
Even that would be low-effort. It’d take about an hour of coding, and another hour to register all the accounts. (Somewhat longer would be getting administrator approval to break the rules.) If I really wanted to get him, I’d pull down the source code for Less Wrong and create tools to find his bots and disable them. It wouldn’t even be difficult.
As for the second question, of whether focusing on him is giving him what he wants: Some of you weren’t around for his first downvote campaign. Let’s not kid ourselves: Eugine won, and Less Wrong was left crippled; he already GOT what he wanted. Many people left as a result of his campaign, and Less Wrong entered something of a downward spiral from which it never fully recovered. The people who left were all people who disagreed with his views, and this has created a bias which has, through the slow weight of upvotes and downvotes, become acculturated here.
And he’s never stopped pushing Less Wrong in his favored direction. I’ve kicked the hornet’s nest, mostly because that’s what I do, and he’s more active at the moment—but make no mistake, he’s never stopped being active. As demonstrated by his recent campaign, he’s never actually given up the methods he used the first time around.
Less Wrong 2.0 won’t fix the problem, and as long as he’s playing this game, he’s deciding the direction Less Wrong leans—by pushing his finger down whenever the scale stops favoring him. Whatever rules are put in place, he’ll ignore or attempt to game. He’s waging war on those of you he disagrees with—he’s been waging war on you for years—but now it’s noticeable. I think most people have noticed now.
Nancy has done an excellent job of nuking his accounts as they’ve made themselves known, but the tools do not yet exist to truly finish him off.
So. Any other strategies? Do mind that what you write here, he’ll as likely as not read. I’m engaging him, but I can’t say what my strategies are. (Insofar as I specify my own strategies, I’m writing the strategy I want Eugine to read, and respond to. Yes, this includes this entire comment; I want Eugine to read this, and more, I want him to know that I know that he knows what I’m up to. Or at least, he thinks he does.)
This is rude to say, but I honestly believe that the technical support of LW does not give a fuck about Eugine, and their cooperation is lukewarm at best. Otherwise the problem would have been solved years ago.
Really, how difficult would it be to create a script that reverts all of Eugine’s votes? Let’s suppose it would take a week of work. So? More than a hundred weeks have already passed, and nothing has happened.
Without the cooperation of technical support, there is not much a moderator can do, other than playing whack-a-mole with the new accounts. Which, as we see, does not work, because Eugine just creates new accounts, and the downvotes made by the old ones stay there.
For what it’s worth, I think tech support cares somewhat, but not enough for a gung ho effort.
Call me when they at least revert Eugine’s votes from his known accounts. Or just tell me your probability estimate it will happen before the end of 2016. :(
I think you’re correct, but it may be more accurate to say that the technical support of LW doesn’t give a fuck about LW generally. My vague memory is that they are doing this for free, which is nice of them but doesn’t exactly give them a lot of motivation to keep things running well.
Suppose Eugine is destructive enough that everyone gives up on LW and they close it down. For LW tech support, that’s a successful outcome: they don’t have to bother with it any more.
I think that’s nastier than necessary—tech support has been giving some help. The problem is that they aren’t willing to develop new tools.
If other people make the necessary tools, are they willing to deploy them?
I’ve asked tech about this.
I don’t think Eugine wanted to destroy LW at that point in time.
I take it OW meant not “Eugine wanted to destroy LW, and got what he wanted” but “Eugine wanted to make LW unpleasant for people with sociopolitical opinions very different from his and drive them away from LW, and got what he wanted—and that destroyed LW”.
I agree that Eugine surely didn’t want to destroy LW at that point. I have no idea what he wants to do to it now.
I imagine he might want it to become a “more right” forum (maybe he believes that after “weeding out” all wrongthinkers it would happen automatically), but that seems unlikely to happen.
At this moment, I guess the choices are merely: (a) LW will somehow fix things and get rid of Eugine; (b) LW will continue as usual, including the annoyance over Eugine’s games; or (c) a better debating forum will appear and people will move there.
Small-group politics is as mindkilling as large-group politics. I’d like to hear a lot less about the topic (though I do support software changes to make bad actors less harmful, such as tracking votes to be able to undo banned-account voting, and soft-bans where the target doesn’t realize it’s banned—it can vote and post, but nobody else ever sees it).
I don’t agree that he’s had all that much impact. I was around for the original harassment—it was annoying, but it didn’t change the direction of movement; the diaspora had already started. It may have accelerated things a hair.
The difference now may be that LW has lost enough thought leaders and original posters for “finger on the scale” manipulation to actually have an effect. I’d argue that to the extent it’s true, we’re already dead.
You probably underestimate the number of new users—the ones who posted their first five or ten comments, received −1 karma on each, and left the website because they felt like the community disliked them (while in reality their only “sin” was e.g. mentioning being a woman in one of those comments)—who in an alternative reality could have produced useful content for the website.
I agree that downvoting crusades and lower quality of content are mostly two separate problems that need to be addressed separately. But on some scale, one bad thing contributes to another.
You’re likely correct—turning off a new user who decides to keep posting elsewhere rather than making LW more interesting is a serious harm. While I hope most users (new and old) pay more attention to comments and replies than votes, that’s not how some are wired.
My impression is that it accelerated the departure of lefty and/or female LWers by more than a hair.
There really isn’t that much on LW about this—if it seems like a lot, I think it’s more because there’s so little other content on LW.
That was actually done to Eugine at one point. He quickly noticed it, and freaked out.
As far as I know, it actually wasn’t done, it was just Eugine’s way to create more drama. He sometimes tries new strategies (such as reposting his old comments using new accounts), and this could be one of them. Or he was genuinely mistaken; it’s hard to tell with this kind of person.
Oh, really? That’s funny. I’m disappointed—for all Eugine’s faults, I’d thought he was generally honest and intelligent, but this seems like good evidence of serious failure on at least one of those.
I am almost sure that Nancy Lebovitz shadow-banned The_Lion at some point, as his comments showed up on his user page but not in their context (including my inbox).
No, I didn’t. I’ve got a comment somewhere saying that I didn’t think shadow-banning would work on anyone who was paying attention.
Also, I don’t have the tools needed for shadow-banning.
Well, I’m not sure how to explain that, but I still find Eugine’s hypothesis (that he was shadowbanned, and then at some moment Eliezer himself intervened and removed the shadowban) quite unlikely.
My model of Eliezer’s approach to moderation assigns very low probability to this whole story. And if I believe that half of the story was made up, I have no reason to trust the other half.
There are two separate claims: (1) He was hell-banned; and (2) EY personally intervened to un-ban him.
Is there evidence that this didn’t happen? I, too, am more suspicious about EY intervening, but regarding the first claim nobody (in particular, Nancy) jumped up and said that Eugine was making shit up and that no one actually hell-banned him.
...I honestly don’t understand why it matters.
He was banned. Hell-banning seems appropriate, given that he continued to try to skirt the ban.
Hell-banning does not work at all with people who use sockpuppets. So you may argue that it was justified, but it still wasn’t the appropriate tool for the job.
Granted. I guess I’m puzzled as to why its use or non-use ultimately matters?
Well, she said it now.
(Linking here because the whole debate is downvoted, so it’s easy to miss new comments.)
Yes, it does seem that Eugine had a paranoid episode or something and started to imagine things. Or it was a really bad attempt at getting public sympathy :-/
Any specifics?
One lefty female comes to mind, but I believe she left LW basically because she didn’t find NRx (and possibly HBD) pushback acceptable. It was more like she didn’t want to be in the same forum with people holding such views.
Such departures, IMHO, cannot and should not be helped.
Is there anyone who left LW specifically because of karma harassment?
It’s hard to tell; people don’t usually bother saying why they’re going. But I can offer someone saying they almost left because of a single incident of mass-downvoting. And daenerys (who has since left LW) saying that mass-downvoting is discouraging her from participating much, though at that point she evidently had no plans to leave altogether.
And, over on Slate Star Codex (where there are no links to individual comments; sorry), go to this thread and search for “Because I got mod-bombed” and you’ll find ialdabaoth saying that’s why they left LW; if you read other comments near that one you’ll find a bunch of other people saying they left and/or are considering leaving because they don’t like how it feels to get heavily downvoted; they aren’t (I think) talking about Euginification, but if (1) it’s common to be pushed away from places like LW because being heavily downvoted is unpleasant, and (2) there is someone around throwing heavy downvoting at people whose politics he doesn’t like, there’s an obvious conclusion to draw.
I don’t know the politics (or, in several cases, the gender) of the people I’m pointing at, so I am not going to claim them as examples of “lefty and/or female LWers” specifically; but, again, if we have evidence (1) that mass-downvoting encourages people to leave and (2) that mass-downvoting is preferentially targeted at those who are lefty and/or female, then there’s an obvious conclusion to draw.
[EDITED to add:] I am pretty sure I remember other people saying things like “I got mass-downvoted and it makes me feel really negative about LW and I hardly post here any more”, but the above is all that a few minutes’ googling turned up and more research than that seems unwarranted. Also, while looking I found this study of the impact of voting on user behaviour, which doesn’t find that downvoting drives people away (but doesn’t, I think, look at all at the sort of mass-downvoting LW suffers from); I am linking it here (1) because cherry-picking is bad and (2) because it’s an interesting paper anyway.
I’m a conservative, so I might be biased, but the notion that Less Wrong is culturally unwelcoming to lefties strikes me as not just wrong, but funny. In any given scan of the site, I’ll see 3-4 things that offend me.
Threads will contain—not as the point of the thread, but just as background noise, as assumptions with which the writer presumes everyone will agree—atheism, pro-choice stuff, polyamory (usually with same-sex relationships in there), discussions of how cool it will be once we turn our bodies into robots, etc.
I recognize that it is possible that the site is somehow also offensive to progressives and I simply miss out on all of the conservative talking points because they are transparent to me (fish don’t see the water that they swim in, etc.), but I don’t think that’s the case.
I’m not claiming that LW is generally hostile to lefties, nor that there aren’t things that happen here that might annoy righties or push them away, nor that overall it’s worse for lefties than for righties. Only that one particular thing that happens here makes LW more unpleasant for lefties than it need be and drives some away.
(I would prefer LW to be a place where people with any political proclivities at all can feel welcome, unless those proclivities are severely and overtly anti-rational or so obnoxious as to render them unwelcome pretty much everywhere.)
I agree with this if you simply look at the site as it is, but the kind of movement that gjm is talking about has certainly happened, and Eugine’s downvoting may have contributed to that.
Some years ago, if you even mentioned religion or a culturally conservative practice without saying something negative about it, you would very likely be downvoted. I’m pretty sure that even happened to gjm on at least one occasion—he was downvoted and added, “I don’t see what’s wrong with this comment,” and I’m pretty sure it was downvoted just because he didn’t add something negative when he mentioned religion.
That is obviously not the case anymore with religion. And I just recently was giving some arguments favoring a policy of no sex before marriage, without that kind of result. Of course people still disagreed, but they didn’t object to the fact that someone was arguing that point.
So it seems to me true that there has been a substantial amount of movement, even if it is still true overall that LW is more leftwing than not.
A comment’s date and time is a permalink to that comment. Here’s Ialdabaoth’s “mod-bombed” comment.
D’oh! Thanks.
(I have a feeling I’ve made the same mistake before and had it pointed out before. Perhaps I’ll remember next time.)
An interesting thread. My overwhelming impression from it is that people left LW because it stopped being interesting.
Some people did. Some people left for other reasons. One of those reasons was disliking getting downvoted a lot. In one case, it was specifically disliking getting mass-downvoted by Eugine. Which happens to be what you asked for.
(I agree that most people who have left LW have left for reasons other than getting mass-downvoted by Eugine. I hadn’t thought that was under any sort of dispute.)
“Mod-bombed” is a strange expression. I find it probable that at least some people left LW because of karma harassment. However, my impression stands—what made LW barren is people leaving because it stopped being interesting. But judging by the volume of discussion about particular reasons for leaving, you’d never guess that :-/
And vice versa.
LW would probably still be interesting if certain people (e.g. Eliezer and Yvain) still regularly posted here.
And they left because they were done with their respective projects, and maybe because of negative comments.
Eliezer had said the things he was planning to say with the sequences and had found new research fellows to start working on AI again.
Yvain was a sock puppet that Scott used on a role-playing forum he was active on, and he did some LW posts as backstopping. Then he continued posting here for a while, but writing without politics and not under his own name felt too much like hard work. Now his entire blog is pseudonymous, because writing about politics under your own name is not such a good idea, but it is still all about political conversations rather than the sciency stuff he did as Yvain.
But it was not about Eugine or downvotes, because they always got many more upvotes than downvotes on damn near every comment and every post.
Yep. It’s a self-reinforcing feedback loop; there’s a reason it’s known as the death spiral.
I know one such person offline. Could be the only one, could be more of them. We don’t know.
Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY’s arguments.
I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I’m not a cryonicist because I don’t think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don’t worry about UFAI because I don’t think our civilization has the capability to achieve AI. It’s not that I think AI is spectacularly hard, I just don’t think we can do Hard Things anymore.
Now, I don’t know whether my pessimism is more rational than others’ optimism. LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don’t talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?
I’m sympathetic to the idea that we can’t do Hard Things, at least in the US and much of the rest of the West. Unfortunately progress in AI seems like the kind of Hard Thing that still is possible. Stagnation has hit atoms, not bits. There does seem to be a consensus that AI is not a stagnant field at all, but rather one that is consistently progressing.
Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated. Societies that are dynamic and competent in one area, such as physics research, will also be dynamic and competent in other areas, such as infrastructure and good governance.
What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world’s best research and technology in the field of microbiology. Or we might observe that Indonesia had the best set of laws, courts, and legal knowledge. Such observations would falsify my hypothesis.
If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly. One obvious thing that could derail American innovation is catastrophic social turmoil.
Optimists could accept the civilizational competence correlation idea, but believe that US competence in areas like infotech is going to “pull up” our performance in other areas, at which we are presently failing abjectly.
Soviet Russia did very well with space and nukes. On the other hand, one of the reasons it imploded was that it could not keep up doing very well with space and nukes.
I think the correlation you’re talking about exists, but it’s not that strong (or, to be more precise, its effects could be overridden by some factors).
There is also the issue of relative position. Brain drain is important, and at the moment the US is the preferred destination of energetic smart people from all over the world. If that changes, the US will lose much of its edge.
I used to think that the Soviet Union was worse at economics, but at least better at things like math. Then I read some books about math in the Soviet Union and realized that pretty much all mathematical progress there came from people who were not supported by the regime, because the regime preferred to support the ones good at playing political games, even if they were otherwise completely incompetent. (Imagine equivalents of Lysenko; e.g. people arguing that schools shouldn’t teach vectors, because vectors are a “bourgeois pseudoscience”. No, I am not making this one up.) There were many people who couldn’t get a job in academia and had to work in factories, and they did a large part of the math research in their free time.
There were a few lucky exceptions. For example, Kolmogorov once invented something that was useful for WW2 warfare, so as a reward he became one of the few competent people in the Academy of Sciences. He quickly used his newly gained political power to create a few awesome projects, such as the international mathematical olympiad, the mathematical journal Kvant, and high schools specializing in mathematics. After a few years he lost his influence again, because he wasn’t very good at playing political games, but his projects remained.
Seems like the lesson is that when insanity becomes the official ideology, it ruins everything, unless something like war provides feedback from reality, and even then the islands of sanity are limited.
What were these books? I don’t speak Russian, so I’ll probably follow up with: who were a few important mathematicians who worked in factories?
I’ve heard a few stories of people being demoted from desk jobs to manual labor after applying for exit visas, but that’s not quite the same as never getting a desk job in the first place. I’ve heard a lot of stories of badly-connected pure mathematicians being sent to applied think tanks, but that’s pretty cushy and there wasn’t much obligation to do the nominal work, so they just kept doing pure math. I can’t remember them, but I think I’ve heard stories of mathematicians getting non-research desk jobs, but doing math at work.
Masha Gessen: Perfect Rigour: A Genius and the Mathematical Breakthrough of the Century
This is the story of one person, but there is a lot of background information on doing math in the Soviet Union.
Thanks! Since that’s in English, I will take at least a look at it.
Gessen does not strike me as a reliable source, so for now I am completely discounting everything you said about it, in favor of what I have heard directly from Russian mathematicians, which is a lot less extreme.
Many of the same people worked on both projects. In particular, Keldysh’s Calculation Bureau.
I’m sure they’re correlated but not all that tightly.
I think there are some pretty good examples. The soviets made great achievements in spaceflight and nuclear energy research in spite of having terrible economic and social policies. The Mayans had sophisticated astronomical calendars but they also practiced human sacrifice and never invented the wheel.
I doubt it, but even if true it doesn’t save us, since plenty of other countries could develop AGI.
A LWer created Omnilibrium for that.
Any results? (I am personally unimpressed by the few random links I have seen.)
Sure there is. Start with the usual rationalist mantra: what do you believe? Why do you believe it?
How would you describe this Great Stagnation? Why do you believe we are headed towards this?
And let us pick up from there.
I don’t think “we don’t talk about politics” is true to the extent that people are going to have blind spots about it. Politics isn’t completely banned from LW, and there are many other venues that are about politics, from Facebook discussions with LW folks to Yvain’s blog, various EA fora, and Omnilibrium.
I think we even had the question of whether people believe we are in a great stagnation in a past census.
How do you know? Did you actually look at the relevant census numbers to come to that conclusion? If so, quoting the numbers would make your post more data-driven and more substantial. If your goal is to have an important discussion about civilizational issues, being more data-driven can be quite useful.
Humanity, or just the West?
I don’t see why not.
That, too. That large error bar of uncertainty isn’t going to go away even if we talk about the issues :-)
What skills are overwhelmingly easier to learn in an institutionalized context?
(e.g. math wouldn’t count, because even if motivation is circumvented as an issue in institutions, you should theoretically be able to study everything at home. Neither would the handling of some kinds of lab equipment necessarily count, if there was clear documentation available to you and (assuming that you took the effort to remember it) the transfer to practice was straightforward (so pushing buttons and changing settings would be straightforward, while the precise motions of carving a specific kind of motif into wood would be less so))
In practice, learning to handle certain lab equipment outside of an institutional context is sometimes hard because it’s much easier to break expensive stuff if you don’t have someone looking over your work the first few times you do something. Of course, you qualified your above statement quite well, so you haven’t said anything incorrect. :)
Heh.
Probably saying the obvious, but anyway:
What is the advantage of nice communication in a rationalist forum? Isn’t the content of the message the only important thing?
Imagine a situation where many people, even highly intelligent, make the same mistake talking about some topic, because… well, I guess I shouldn’t have to explain on this website what “cognitive bias” means… everyone here has read the Sequences, right? ;)
But one person happens to be a domain expert in an unusual domain, or happened to talk with a domain expert, or happened to read a book by a domain expert… and something clicked and they realized the mistake.
I think that at this moment the communication style on the website has a big impact on whether the person will come and share their insight with the rest of the website. Because it predicts the response they get. On a forum with a “snarky” debating culture, the predictable reaction is everyone making fun and not even considering the issue seriously, because that’s simply how the debate is done there. Of course, predicting this reaction, the person is more likely to just avoid the whole topic, and discuss something else.
Of course—yes, I can already predict the reactions this comment will inevitably get—this has to be balanced against people saying stupid things, etc. Of course. I know already, okay? Thanks.
Speaking as somebody who frequently engages in non-nice methodologies:
Niceness is more convincing. Way more convincing. And if you can get somebody to be mean enough to you, while you’re being nice, that somebody feels like they should defend you, cognitive dissonance will push them to believe in your beliefs a little bit more.
If some people are nice, and some people are mean, we’re injecting some very subtle irrationality into people reading our discourse.
So there is an advantage in picking one and sticking to it. (Or my policy, which is to match the tone of my opponent as well as I can.) And niceness is probably an easier Schelling point than meanness.
A Suite of Pragmatic Considerations in Favor of Niceness
Yep. I’ll try to make a short summary of some arguments in the article and comments:
Why people want to be mean:
it signals strength (in the ancient environment it shows you are not afraid of being hit in return);
it signals intellectual superiority e.g. in the form of sarcasm;
if you already have a reputation, you can win debates quickly;
it helps you put distance between yourself and people you want to avoid.
What are the negative impacts of meanness:
you may be wrong, but you have already proposed a solution (“the other person is stupid”);
if there is a misunderstanding, hostile reaction lowers the chance of explaining or increases the time needed, compared with a polite request for clarification;
people with different experience will seem especially wrong to you, so this effect will be even stronger there;
you spread bad mood, which harms curiosity and exploration;
you signal that you are bad at cooperation, bad at managing your emotions, and uncaring about other people;
people stop listening to you and start avoiding you;
you lose possible allies.
Content is multi-level. A chunk of text often means more than the literal reading of the words.
People use forums for many things. Sometimes it’s to inform, sometimes it’s to set out a position, sometimes it’s to vent and bitch, sometimes it’s to just wave a dick around, sometimes it’s to play social games, etc. It helps to figure out quickly to which category a message belongs and the style or tone of the message (here: nice or mean) is important. Think of it as a fuzzy tag, an email header line, a hint at how this message should be interpreted.
It’s not simple, of course, and there is a lot of misdirection and false flags and signaling and counter signaling… basically, it’s humans communicating :-)
Is there anything in your post where you think that a likely reader doesn’t already know what you are arguing?
That seems like arguing against a strawman.
I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4—20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson’s Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.
Some examples that mostly match what I want, in roughly descending order:
Theory, Evolution, and Games Group: http://egtheory.wordpress.com/
colah’s blog: http://colah.github.io/
Math ∩ Programming: http://jeremykun.com/
Some journals in the Annual Reviews and Nature Reviews series (but these are hit-or-miss), in particular Annual Review of Statistics and Nature Reviews Neuroscience;
“Primer” articles in the PLOS journals.
How do I go about finding more feeds like that? I have already tried the obvious, such as googling “allintext: egtheory jeremykun” and found a couple OPML files (including gwern’s), but they didn’t contain anything close. The obvious blogrolls weren’t helpful either (most of them were endless lists of conference announcements and calls for papers). Also, I’ve grepped a few relevant subreddits for *.wordpress.*, *.blogspot.* and *.github.io submissions (only finding what I already have in my RSS feeds — I suspect the less established blogs just haven’t gotten enough upvotes).
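For concreteness, here is roughly what the subreddit-grepping step looks like as a script, using Reddit’s public JSON listings (the subreddit name and the domain patterns are just examples, not recommendations):

```python
# Pull a subreddit's top submissions via Reddit's public JSON listing and
# keep only the links whose host looks like a personal blog.
import json
import re
import urllib.request

BLOG_HOST = re.compile(r"\.(wordpress|blogspot)\.|\.github\.io$")

def blog_links(subreddit, limit=100):
    url = "https://www.reddit.com/r/%s/top.json?t=all&limit=%d" % (subreddit, limit)
    req = urllib.request.Request(url, headers={"User-Agent": "feed-finder/0.1"})
    with urllib.request.urlopen(req) as resp:
        listing = json.load(resp)
    return sorted({post["data"]["url"]
                   for post in listing["data"]["children"]
                   if BLOG_HOST.search(post["data"].get("domain", ""))})

for link in blog_links("compsci"):
    print(link)
```

The same filter works on OPML files or blogrolls once you extract the URLs from them.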
Lesswrong.com and the Facebook group were very quiet this week. (The Slack doubled in volume to around 18k messages this week.)
Any ideas why?
Possibly just random? There’s a feedback effect where if LW is quiet one day, there’s less to respond to the next day so it is likely to remain quiet—so I think smallish random fluctuations can easily produce week-long droughts or gluts.
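To illustrate: in this toy model, each day’s comment count is Poisson with mean proportional to the previous day’s count plus a small baseline. All the parameters are invented; the point is only that feedback plus noise produces long runs of quiet or busy days with no external cause.

```python
# Toy model: activity feeds back on itself, so small random fluctuations
# get stretched into week-long droughts or gluts.
import math
import random

def poisson(rng, mean):
    # Knuth's method; fine for the small means used here.
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate(days=56, baseline=4.0, feedback=0.8, seed=2016):
    rng = random.Random(seed)
    activity, today = [], 20
    for _ in range(days):
        today = poisson(rng, baseline + feedback * today)
        activity.append(today)
    return activity

print(simulate())  # quiet weeks and busy weeks emerge from noise alone
```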
More and more people have drifted away and not been replaced by active posters. There are still a few threads on more traditional LW topics, but they are not attracting much discussion. The most active discussions seem to revolve around a single member whose attempts at disruption have been entirely successful at multiple levels. Some of the more prolific remaining posters are judged, via downvotes and commentary, to be of low quality, and little contentful discussion ensues. There are still a few debates or arguments outside meta topics, but they are mostly covering familiar ground.
LW is not a well-kept garden any longer; one may wonder whether it is even a garden. LW2.0 is often mentioned as a glorious future, but it’s looking pretty bleak around LW1.0 in the present.
As with many areas, the future could not come soon enough.
Maybe students in some universities have midterms?
Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They mention a variety of risks but focus on nuclear war and worst-case global warming.
The numbers appear to be more or less made up.
http://nostalgebraist.tumblr.com/post/143718406034/the-future-of-humanity-institute-seems-very
That seems like an accurate analysis.
I’m actually more concerned about an error in logic. If one estimates a probability of, say, k that climate change will cause an extinction event in a given year, then the probability of it occurring somewhere in a string of n years is not the obvious 1 − (1 − k)^n, because part of what goes into estimating k is the chance that climate change can cause such an event at all; that shared uncertainty means the individual years are not independent.
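A toy calculation shows the size of the effect (all numbers invented for illustration): a point estimate of k compounded naively keeps climbing toward 1, while a mixture that is 50% “climate change cannot cause extinction at all” can never exceed 0.5, no matter how many years you stack up.

```python
# Compare compounding a point estimate of the per-year risk with a mixture
# that carries the uncertainty about whether the mechanism exists at all.
n = 1000  # horizon in years

# Naive: treat k = 0.001 as a known, independent per-year risk.
p_naive = 1 - (1 - 0.001) ** n

# Mixture with the same mean per-year risk: 50% chance the risk is zero,
# 50% chance it is 0.002.
p_mixture = 1 - (0.5 * (1 - 0.0) ** n + 0.5 * (1 - 0.002) ** n)

print(round(p_naive, 3))    # 0.632
print(round(p_mixture, 3))  # 0.432, bounded above by 0.5 for any n
```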
I realize that if I Pomodoro most things, instead of just some things, I feel more motivated to go through my to-do list. Sorry if this is already obvious. I tend to do Pomodoros on repetitive, long-term, open-ended tasks like studying, practicing, or working.
I’d refrained from doing any poms on short-term goals that are uncertain in how long they take (more than an hour but less than 8 hours; for example, researching health insurance). I feel unmotivated to start such a task because I know it’s going to take a long time, but not too long, and I don’t know how long, so I procrastinate. Putting “do 2 poms of research on health insurance, then reassess whether I need more” on my list feels more motivating.
If I had to guess why I had a tendency to leave smallish tasks off my pom list, I would guess I was being arrogant in thinking I had the willpower to just outright do these tasks without resorting to poms.
BBC News is running a story claiming that the creator of Bitcoin known as Satoshi Nakamoto is an Australian named Craig Wright.
People on Hacker News and reddit.com/r/bitcoin are sceptical.
Do we know which country Wright was living in during 2010?
Meta: I got the date of the last OT wrong, modified it to say 25th-1st, and this thread runs 2nd-8th.
It just got a lot cheaper and easier to do amino acid builds and mods. With a helpful AGI, you could have designer drugs for pennies per design.
http://phys.org/news/2016-04-molecule-building-method-vast-realm-chemistry.html
edit Paper http://science.sciencemag.org/content/early/2016/04/20/science.aaf6123
DIY http://www.kurzweilai.net/garage-biotech-new-drugs-using-only-a-computer-the-internet-and-free-online-data
I apologize in advance for asking an off-topic question, but my Google-fu has failed me.
My girlfriend’s niece is a Small Child who likes to turn the volume on her Android tablet all the way up, making it too loud for everyone else. How can we make it so that when she tries to make the tablet louder, nothing happens? (I know how to do this on an iOS device but not an Android one.)
I use Volume Locker to keep myself from changing volume by accidentally pressing buttons when picking up my phone.
Thanks, that seems to be working for now...
Have you looked for apps that will do this? Something that does for volume what “Twilight” does for screens. Have you checked the parental control tools? Have you considered getting the kid a hearing test?
Seconding getting the kid a hearing test. Alternatively, speech therapy, if the issue is that she cannot understand what’s being said.
Look for kids’ headphones with a maximum child-safe volume level.
If she’s smart enough to understand words then just tell her not to do it. Take away the tablet whenever she disobeys.
If she’s too young for that, tape over the part of the thing that she could press, or just hang it out of reach playing something happy.
Tried the first thing. She doesn’t listen—the result would be that she never keeps the tablet for very long.
How long did you try? It took me like 2 weeks to teach my nephews to do what I said in a similar case (keeping the TV turned down instead of a tablet). You need the parents’ cooperation too.
It’s sometimes difficult to take the tablet away immediately. A typical scenario is that my girlfriend and I are in the front seat of the car while the Small Child sits in a booster seat in the back and wants to use the tablet; she’ll fight to keep it and it’s hard to reach around the chair to take it out of her hand. Also there’s the fact that the Small Child consistently breaks promises—she’ll agree not to make it loud to get the tablet back, but immediately turn up the volume anyway when I give it to her. A technical solution is easier than playing dog trainer to a child with a developmental disability...
I did an exercise in generating my values.
A value is like a direction—you go north, or south. You may hit goal mountains and hang a right past that tree, but you still want to be going north. Specifically, you may want to lose weight on the way to being healthy, but being healthy is what you value. This was from a 5-10 minute brainstorm, pen-and-paper session (with a timer) in one of our dojos. I kinda don’t want it to be just for my benefit, so I figured I would share it here; the values are in no order.
My values rot13:
Haqrefgnaq ubj guvatf jbex
Yvir ybat, Urnygul
Unir rabhtu jrnygu gb yvir jvgubhg jbeel sbe zl shgher (naq zl snzvyl’f shgher)
Perngr guvatf bs inyhr gb zr be bguref—Neg, Jevgvat, Pbafgehpgvbaf, jbbq/ryrpgebavp, Prenzvpf
Uryc jvgu gur gbbyf bs gur shgher
Haqrefgnaq ubj V jbex rabhtu fb gung V pna jbex gbjneqf nyy bs zl inyhrf
qb guvatf V rawbl
unir gur cbjre gb or serr gb qb nf V cyrnfr
yrnir n yrtnpl (ivn perngvat guvatf)
uryc crbcyr ol oevatvat gurz gbtrgure
unir vasyhrapr jura V jnag vg
or npxabjyrqtrq (fhotbny gb yrtnpl)
Nibvq nqqvpgvba, fgntangvba, cnva, ybaryvarff, wnvy, qrog, qehtf.
Xabjyrqtr - (fhotbny gb qbvat gur guvatf V jnag gb qb)
Or ebznagvpnyyl unccl/shyshyyrq
Hopefully this gets you thinking about doing the exercise once for yourself. Also some ideas came from The list of common human goals that I wrote a while back.
NZ epidemiologist A.L. Pearson appears to have predicted the Trans-Pacific Partnership in 2014: “Although such a case may have no strong grounds in existing New Zealand law, it is possible that New Zealand may in the future sign international trade agreements where such legal action became more plausible.” - British Medical Journal
Why do I, as a desperate male (lonely-and-horny levels of desperate), stave off the attention of females when I’m not the one leading the charge? One of my peak experiences was visiting Torquay on an undergrad uni field trip, walking with the sexiest girl I’d ever met. A busker was playing ‘I’m a Believer’ at a market. It was magical. After the field trip she invited me on a coffee date, and I agreed. I never took the initiative from there, and nothing happened. I had spent a week fantasising about her and enjoying her company, but her sexual aggression was somewhat intimidating. The same happened recently with someone I struggle to appreciate, a girl who flirts with me on an ongoing basis.
My housemate said having a strong feeling of “I don’t want to be like my parents” will make me more like them. I wonder if that’s true? Is trying to be less neurotic self-defeating?
CFMEU has a new slogan: ‘every battle makes us stronger’. Looks like smart advertising from a group that’s under constant fire.
Reframe log
Instead of seeing people moving through crowds as antagonistic, see them as having a compatible want (not wanting to collide with you).
Instead of seeing strangers around me as potentially violent threats, see them as potential defenders.
Behavioural insight/modification:
Stop doing those sloppy back-slap drum-roll hugs, Carlos!
I used topsy-turvy photo icons in my science presentation. I thought it looked kooky and kitsch. It looked dumb. As they say: ironic shitposting is still shitposting.
Given that the Trans-Pacific Partnership negotiations started in 2008 and were first scheduled to end in 2012, predicting it in 2014 seems like a feat that doesn’t have much to do with prediction, just with being up to date about what’s currently being negotiated.
Well, what are your beliefs and feelings about intimacy and sex? If you imagine yourself accepting the offers, what would it mean about you? Imagine it like a movie, and then what your parents (or other important people) would say about that.
(I suspect there is something negative, either directly about you e.g. “if you don’t lead, then you are weak”, or about the girl and then indirectly about you e.g. “if she initiates, she is a slut; and you are a loser if you date a slut”.)
This is a complicated clusterfuck and I don’t know where to begin
I would feel kinda ashamed
I feel I can totally identify with this suggestion. But I’m not sure if that’s just because I’m suggestible.
Thank you so much for your insight.
I’ve had similar reactions in the past. There are a couple of reasons, I think. Fear of the unknown, of jumping into new social situations. Nearsightedness, in wanting everything to go perfectly the first time so much that you don’t get practice at making things go well. Fear of exposing myself to rejection, coupled with harder-to-describe feelings of low romantic or sexual worth. The feeling that you don’t really know for absolutely sure that you want to spend a ton of time with the person you’re flirting with, so you shouldn’t follow through.
Two things have helped me with this. The first is increasing my self-worth a little. You can probably think of men less physically attractive than you who have had perfectly happy relationships. Try to understand what makes them attractive people (I tend to think of this as “falling in love” in miniature). In fact, I’ve found this exercise of trying to see the lovable in other people is a pretty good one in general. Anyhow, you can do this on yourself too. You have plenty of good points, I guarantee it.
The second thing was just jumping into those novel social situations. I have a mantra for it, even: “I would regret not doing it, therefore I will do it.”
I suppose so
Other experiences support this hypothesis in my case
yep
I don’t want to get attached to someone that’s gonna burn me! :(
That’s a very compelling case. Thank you. And, I feel more positive about other people now too :)
I guess it’s time to pull up that backlist of people I have a vague interest in… ;)
The “simulation argument” by Bostrom is flawed. It is wrong. I don’t understand why a lot of people seem to believe in it. I might do a write-up of this if anyone agrees with me, but basically, you cannot reason about what is outside our universe from within our universe. It doesn’t make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example: simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation’s creators. If we’re not in a simulation, we’re not in a simulation. Either way, the simulation argument is flawed.
First, Bostrom is very explicit that the conclusion of his argument is not “We are probably living in a simulation”. The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won’t reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.
Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that, since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It’s worth noting that these are claims about our universe, not about some parent universe.
In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom’s reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn’t apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom’s response mathematically precise would be a good way to track down the flaw (if any).
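For what it’s worth, Bostrom’s own paper already reduces the quantitative core of the argument to a single fraction (reproduced here from memory, so treat the notation as a sketch rather than gospel): let f_p be the fraction of human-level civilizations that reach a post-human stage, N the average number of ancestor-simulations such a civilization runs, and H the average number of pre-post-human minds per civilization. The fraction of all human-like minds that are simulated is then

```latex
f_{\mathrm{sim}} \;=\; \frac{f_p \, N \, H}{f_p \, N \, H + H} \;=\; \frac{f_p \, N}{f_p \, N + 1}
```

Unless f_p N is close to zero (which is just claims (1) or (2) in disguise), f_sim is close to 1, and the SSA is then invoked to move from “almost all minds are simulated” to “we are probably simulated”. The objection above targets precisely that last step, so that’s where any formalization effort should focus.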
I am taking issue with the conclusion that we are living in a simulation, even given premises (1) and (2) being true.
So I am struggling to understand his reply to my argument. In some ways it simply looks like he’s saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are unreliable if we are in a simulation, or support a conclusion that is plainly false if we aren’t in a simulation.
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be “real minds” dwelling in “real brains”, and some would be simulated.
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
Right. When I say “his conclusion is still true”, I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not “we are living in a simulation”.
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom’s conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that’s all you’re claiming, then you’re not disagreeing with the simulation argument.
The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are “true” I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber “real” minds, then it’s likely we are all simulated. I’m not really sure how us being “accurately simulated” minds changes things. It does make it easier to reason outside of our little box—if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.
Let’s assume I’m trying to make conclusions about the universe. I could be a brain in a vat, but there’s not really anything to be gained in assuming that. Whether it’s true or not, I may as well act as if the universe can be understood. Let’s say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it’s impossible to reason your way into believing you’re in a simulation. It’s self-referential.
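To pin down exactly where we part ways, here is a minimal Monte Carlo sketch of the SSA step (all the counts are invented for illustration). It encodes Bostrom’s move: treat yourself as a uniform random draw from all minds. My objection is that a simulated mind has no license to trust the counts fed into this calculation in the first place.

```python
import random

# Toy model: one unsimulated civilization plus the simulations it runs.
# All counts are hypothetical, chosen only for illustration.
N_REAL_MINDS = 100          # unsimulated ("real") minds
N_SIM_MINDS = 10_000        # simulated minds

minds = ["real"] * N_REAL_MINDS + ["simulated"] * N_SIM_MINDS

# The SSA step: reason as if you are a uniform random sample
# from the reference class of all minds.
samples = [random.choice(minds) for _ in range(100_000)]
p_sim = samples.count("simulated") / len(samples)
print(f"P(a randomly sampled mind is simulated) ~ {p_sim:.3f}")  # ~0.99

# The disputed move: identifying *yourself* with that random sample,
# using counts you estimated from inside a possibly-simulated world.
```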
I’m going to have to think about this harder, but keep criticising what I’m saying as you have been doing, because it certainly helps flesh things out in my mind.
I don’t think that’s true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.
If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don’t get to the point of simulating minds or they choose not to run a significant number of simulations.
If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
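One way to make that concrete (my reconstruction, not Bostrom’s own formulation): let H be the hypothesis “the vast majority of minds are simulated, and simulated minds are weird”, and let O be my actual, ordinary observations. Under the SSA,

```latex
P(H \mid O) \;\propto\; P(O \mid H)\,P(H)
```

If simulated minds have observations very unlike O, then a randomly sampled mind under H almost never sees anything like O, so P(O | H) is tiny and H gets heavily disconfirmed. If instead the simulations are ancestor-simulations whose inhabitants see roughly what we see, P(O | H) stays high and the update runs the other way.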
This is why, when Bostrom describes the Simulation Argument, he focuses on “ancestor-simulations”. In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).
So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators’ ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.
You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.
While I do not agree with the conclusion of the simulation argument, I think your rebuttal is flawed: we can safely reason about the reality outside the simulation if we presume that we are inside a realistic simulation, that is, a simulation whose purpose is to mimic as closely as possible the reality outside. I don’t know if it’s made explicit in the exposition you read, but I’ve always assumed the argument was about a realistic simulation. Indeed, if the laws of physics are computable, you can even have an emulation argument.
Of course you can. Anyone who talks about any sort of ‘multiverse’ - or even causally disconnected regions of ‘our own universe’ - is doing precisely this, whether they realize it or not.
No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?
It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.
We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent’s memory.
There is no limit to how perverted a view of the world a simulated agent could have.
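To illustrate the sort of thing meant here, a toy sketch (entirely hypothetical; nothing here is Bostrom’s model) of a simulator that decouples an agent’s observations from any stable underlying regularity:

```python
import random

def observe_room():
    """What the simulated agent 'sees' on entering a room: the simulator
    rerolls the geometry each time, so no stable physics is learnable."""
    return {
        "geometry": random.choice(["euclidean", "hyperbolic", "spherical"]),
        "dimensions": random.randint(2, 7),
    }

def corrupt_memory(memory, n_bits=3):
    """Flip a few random bits of the agent's memory between observations,
    so it cannot even trust its own record of the past."""
    memory = list(memory)
    for _ in range(n_bits):
        i = random.randrange(len(memory))
        memory[i] ^= 1
    return memory

agent_memory = [0, 1, 1, 0, 1, 0, 0, 1]
print(observe_room())               # different 'laws' on every call
print(corrupt_memory(agent_memory)) # a subtly falsified past
```

An agent inside this loop could still reason in an internally consistent way, but its conclusions about the world containing the simulation would be arbitrary.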
Hm. Let me try to restate that to make sure I follow you.
Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka “ancestral simulations”, and (Esw) simulated environments that don’t closely resemble Er, aka “weird simulations.”
The question is, is my current environment E in Er or not?
Bostrom’s argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).
Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.
Have I understood you?
On Fox News, Trump said that regarding Muslims in the US, he would do “unthinkable” things, “and certain things will be done that we never thought would happen in this country”. He also said it’s impossible to tell with absolute certainty whether a Syrian was Christian or Muslim, so he’d have to assume they’re all Muslims. This suggests that telling US officials that I’m a LW transhumanist might not convince them that I have no connection with ISIS. I’m not from Syria, but I have an Arabic name and my family is Muslim.
I’ve read Cory Doctorow’s Little Brother, and this might be a generalization from fictional evidence, but I can’t help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason? Should I drop everything and make a break for it before it’s too late? Initially, many Germans didn’t take Hitler’s extremist rhetoric seriously either, right? (If I get deported in a civilized manner, well, no harm done to me as far as I’m concerned.)
I normally assume, as a rule of thumb, that politicians intend to fulfill all their promises. If a politician says he wants to invade Mars, that could be pure rhetoric, but I’d typically assume that in the worst case he might try it. I have often observed that when we think other people are joking, they are in fact exaggerating their true desires and presenting them in an ironic/humorous light.
Seems like you’re just falling for partisan media histrionics and conflating a lot of different things out of context.
In context, Trump is giving a tough-sounding but vague and non-committal response to questions about whether there should be a digital database of Muslims in the country. He later partially walked this back, saying it was a leading question from a reporter and he meant we should have terrorism watch lists. Which obviously already exist.
I’d say it’s about as likely as you giving yourself a heart attack reading political outrage porn.
Thanks, I guess. I knew he was talking about a digital database, but I was wondering if it could have been a dogwhistle for something else. I don’t have a favorable opinion of human decency in general.
FWIW, that wasn’t a political comment. I hardly ever read or watch anything political. Some TV clips were shown to me by an acquaintance and I wanted an honest assessment of what he had told me it was about. I don’t have any opinions on the subject myself.
This is a horrible rule of thumb. It’s not anywhere close to true, and even if it were, their ability lags their intent by orders of magnitude. Instead, assume that politicians will very slightly alter existing trends in order to encourage their constituents.
I suspect you are at somewhat higher risk of being targeted by officials for your foreign-ness than you were last year. Trump becoming president would increase that risk somewhat as well, but more because it would be a sign that the general populace is more racist than we thought than because of any actual policy change.
I think it’s really unlikely you’d be imprisoned or tortured, with or without Trump, unless there are stronger ties to enemy groups than just your nationality.
I assume that because I read on the SEP that strategic voting skews results in democracies. The rule of thumb is more like a Schelling point than a lower-order rational principle. I said that’s what I usually do because I’m aware it’s not very applicable in this context, since I’m not voting in these elections, but it’s a habit I’ve indulged in for years, unfortunately.
If I were in a pedantic mood, I’d say that the results skew because of bad voting mechanisms (state-level electors and first-past-the-post decisions) that encourage strategic voting, rather than directly from strategic voting.
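For what it’s worth, the mechanism is easy to exhibit in a toy example (the numbers are hypothetical): under first-past-the-post, sincere voters can split a majority and elect the candidate most of them rank last, which is exactly the pressure that produces strategic voting.

```python
from collections import Counter

# Hypothetical ranked ballots for 100 voters.
ballots = ([("A", "B", "C")] * 34   # 34 voters: A > B > C
         + [("B", "A", "C")] * 31   # 31 voters: B > A > C
         + [("C", "B", "A")] * 35)  # 35 voters: C > B > A

# Sincere plurality: everyone votes their first choice.
sincere = Counter(b[0] for b in ballots)
print(sincere.most_common(1))   # [('C', 35)] -- C wins, though 65% rank C last

# Strategic: the B > A > C voters defect to A to block C.
strategic = Counter("A" if b == ("B", "A", "C") else b[0] for b in ballots)
print(strategic.most_common(1)) # [('A', 65)] -- the strategic bloc flips it
```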
Still, the electoral skew isn’t what you should fear, nor the actual election outcome. The signalling of the populace that such ideas are acceptable to a significant degree is very scary. It’s up to you just how personally to take the fear, and how to react to a risk increase from a small fraction of a percent to a less-small fraction of a percent.
I can imagine that if you’re an activist, or particularly stand out as part of a target group, or are just a nervous person, it might be justified to maintain an exit plan you can execute over the course of a few days if something changes your estimate of personal danger to the measurable range.
Which ideas? After John Yoo’s memos on torture, Snowden, assassination-by-drone as an entirely routine matter, Guantanamo, etc. what exactly is new and scary to you?
New and scary is the degree to which it’s become normal and accepted in mainstream press and the general populace. People with power have always been horrible, but until recently they’ve had to do it in secret and say they’re sorry when they get caught.
So… if we’re talking presidents, this goes straight to Bush and Obama. I would say Obama in particular because he was supposed to be a bulwark against such things.
However, we are discussing why Trump is scary. Why is he scarier than the status quo or, say, Hillary? There is a pronounced trend towards a police state; Trump isn’t going to stop it, but then I don’t see anyone who would and who has a chance of getting into a position where he could.
“As a foreign student in the US, how likely is Trump to have me tortured for no reason?”
It’s hard to judge, but I think having a pro-torture president will make use of torture by the police more likely. My feeling is that you aren’t in clear and present danger, and institutional changes take time.
You are not as safe as someone with a non-Arabic name.
My feeling is that you don’t need a go bag, but you might as well start researching other places which would be good for you to live.
Hitler had a huge party of supporters behind him that he spent a decade gathering. Trump, on the other hand, is much more of a one-man show. One of the biggest roles of the president is making personnel choices, and there is simply no comparable pool of talent. Under a Trump administration, someone like Chris Christie, who’s a long-term friend of the Trump family, is likely going to get a post.
When it comes to totalitarianism, it’s a mistake to assume that the past will repeat in exactly the same way. It’s hard to believe a US government would simply torture random people just because they have Arabic names. It’s more likely that privacy will get completely eroded. Today we have face recognition that’s strong enough to hook up all the street cameras to it and get general movement profiles. Forbidding encryption would also be on the table.
Thanks, I’m basically ignorant about contemporary American politics. (But I’ve read Tocqueville. This is probably not a desirable state of affairs.)
Effective careers
One line summary of What is a fulfilling career? (part 1) @ Cambridge University:
1. autonomy, clear tasks, feedback, and variety --> engaging work;
2. meaningful work;
3. work that helps others;
4. good relationships with colleagues and social support;
5. fair pay, no long commute, no excessive hours;
6. a fit with the rest of your life;
--> job satisfaction.
The subject matter of the work (e.g. that your passion is sports, or that it’s a non-neutral cause choice) is actually irrelevant!
Systematic reviews
PubMed recently spat out:
So, next time I will search PubMed before other databases, to identify whether my wildcards are overly general for my search strategy. Reckon that’s a good approach?
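One way to do that check next time: you can sanity-check how broad a wildcard is programmatically before building a whole strategy around it. Here’s a minimal sketch using Biopython’s Entrez interface (the query terms and email are placeholders):

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def pubmed_hit_count(term):
    """Return how many PubMed records match a query term."""
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# A huge ratio between the wildcard and the specific term suggests
# the wildcard is overly general for your search strategy.
for term in ["therap*", "therapy"]:
    print(term, pubmed_hit_count(term))
```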
Lobbying
World Coal Association says: ‘The power of high efficiency coal—The most cost-effective way to mitigate CO2 emissions’ here
The World Coal Association is a non-profit, so perhaps we shouldn’t fetishise the term “non-profit”, or the term “coal”?
I couldn’t parse this. What do you mean?