Open & Welcome Thread—September 2020
If it’s worth saying, but not worth its own post, here’s a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here.
Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled, but they are overall almost completely separate, so let me go through them individually:
Periergo is an account that is pretty easily traceable to a person that curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don’t think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.
It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam he never asked for, and lots of sockpuppeting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn’t the right place for Periergo.
Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at −675 karma.
The biggest problem with his participation is that he has a history of pulling people into discussions that drag on for an incredibly long time, without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. Its first sentence is “This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.”, and in particular the framing of “quit/evaded/lied” sets the tone for the rest of the post as a kind of “wall of shame”.
Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it’s better for curi to find other places as potential discussion venues.
I do really want to make clear that this is not a personal judgement of curi. While I do find the “List of Fallible Ideas Evaders” post pretty tasteless, and don’t like discussing things with him particularly much, he seems well-intentioned, and it’s quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don’t want others to update on this as being much evidence about whether it makes sense to have curi in their communities.
I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don’t strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to being around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don’t think it’s enough to tip the scales on this issue.
More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT-3. I continue to think that “Well-Kept Gardens Die By Pacifism”, and that it is essential for us to be very careful in handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decisions long before it is proven “beyond a reasonable doubt” that someone is not a net-positive contributor to the site.
In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of him being net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.
I wanted to reply to this because I don’t think it’s right to judge curi the way you have. Periergo I don’t have an issue with (it’s a sockpuppet account anyway).
I think your decision should not go unquestioned/uncriticized, which is why I’m posting. I also think you should reconsider curi’s ban under a sort of appeals process.
Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.
You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI’s standards. I think this is problematic.
I’d like to note I am on that list (about halfway down). I am also a public figure in Australia, having founded a federal political party based on epistemic principles with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should have an issue with it. I knew about being on that list earlier this year, before I returned to FI. Being on the list was not a factor in my decision.
There is nothing immoral or malicious about curi.us/2215. I can understand why you would find it distasteful, but that’s not a decisive reason to ban someone or condemn their actions.
A few hours ago, curi and I discussed elements about the ban and curi.us/2215 on his stream. I recommend watching a few minutes starting at 5:50 and at 19:00, for transparency you might also be interested in 23:40 → 24:00. (you can watch on 2x speed, should be fine)
Particularly, I discuss my presence on curi.us/2215 at 5:50
You say:
There are 33 by my count (including me). The list spans a decade, and it is there for a particular purpose, which is not to publicly shame people into returning, or to be mean for the sake of it. I’d like to point out some quotes from the first paragraph of curi.us/2215:
Notably, you don’t end up on the list if you are active. Also, although it’s not explicitly mentioned in the top paragraph, a crucial thing is that those on the list have left and avoided discussion about it. Discussion is much more important in FI than in most philosophy forums—it’s how we learn from each other, make sure we understand, offer criticism, and assist with error correction. You’re not under any obligation to discuss something, but if you have criticisms and refuse to share them, you’re preventing error correction; and if you leave to evade criticism, then you’re not living by your values and philosophy.
The people listed on curi.us/2215 have participated in a public philosophy forum for which there are established norms that are not typical and are different from LW. FI views the act of truth-seeking differently. While our (LW/FI) schools of thought disagree on epistemology, both schools have norms that are related to their epistemic ideas. Ours look different.
It is unfair to punish someone for an act done outside of your jurisdiction under different established norms. If curi were putting LW people on his list, or publishing off-topic stuff at LW, sure, take moderation action. None of those things happened. In fact, the main reason you’ve provided for even knowing about that list is via the sockpuppet you banned.
Sockpuppet accounts are not used to make the lives of their victims easier. By banning curi along with Periergo you have facilitated a (minor) victory for Periergo. This is not right.
THIS IS A SERIOUS ALLEGATION! PLEASE PROVIDE QUOTES
curi prefers to discuss in public so they should be easy to find and verify. I have never known curi to threaten people. He may criticise them, but he does not threaten them.
Notably, curi has consistently and loudly opposed violence and the initiation of force. If people ask him to leave them alone (provided they haven’t e.g. committed a crime against him), he respects that.
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.
“a history of threats against people who engage with him” has not been established or substantiated.
I believe he is. As far as I can tell he’s gone to great personal expense and trouble to keep FI alive for no other reason than that his sense of morality demands it. (That might be oversimplifying things, but I think the essence is the same. I think he believes it is the right thing to do, and that it is a necessary thing to do.)
He has gained karma since briefly returning to LW. I think you should retract the part about him having negative karma b/c it misrepresents the situation. He could have made a new account and he would have positive karma now. That means your judgement is based on past behaviour that was already punished.
This is double jeopardy. (Edit: after some discussion on FI it looks like this isn’t double jeopardy, just double punishment. Double jeopardy specifically refers to being on trial for the same offense twice, not being punished twice.) Moreover, curi is being punished for being honest and transparent. If he had registered a new account and hidden his identity, would you have banned him only based on his actions this past 1-2 months? If you can say yes, then fine, but I don’t think your argument holds in this case; the only part that is verifiable is based on your disapproval of his discussion methods. Disagreeing with him is fine. I think a proportionate response would be a warning.
As it stands no warning was given, and no attempt to learn his plans was made. I think doing that would be proportionate and appropriate. A ban is not.
It is significant that curi is not able to discuss this ban himself. I am voluntarily doing this, of my own accord. He was not able to defend himself or provide explanation.
This is especially problematic as you specifically say you think he was improving compared with his conduct several years ago.
This alone is not enough. A warning is proportionate.
Unpopularity is no reason for a ban
How is this different to pre-crime?
I think, given he had deliberately changed his modus operandi weeks ago and has not posted in 13 days, this is unfair and overly judgmental.
You go on to say:
What could curi have done differently which would have tipped the scales? If there is no acceptable thing he could have done, why was action not taken weeks ago when he was active?
I believe it is fundamentally unjust to delay action in this fashion without talking with him first. curi has an incredibly long track record of discussion, he is very open to it. He is not someone who avoids taking responsibility for things; quite the opposite. If you had engaged him, I am confident he would have discussed things with you.
It makes sense that you want to cultivate the best rational forums you can. I think that is a good goal. However, again, there were other, less extreme and more proportionate actions that could have been taken first, especially seeing as curi had changed his LW discussion policy and was inactive at the time of the ban.
We presumably disagree on the meaning of ‘high standards’, but I don’t think that’s particularly relevant here.
There were many alternative actions you could have taken. For example, a 1-month ban. Restricting curi to only posting on his own shortform. Warning him of the circumstances and consequences under conditions, etc.
I’m glad you’ve mentioned this, but LW is not a court of law and you are not bound to those standards (and no punishment here is comparable to the punishment a court might distribute). I think there are other good reasons for reconsidering curi’s ban.
I think there is a critical point to be made here: you could have taken no action at this time and put a mod-notification for activity on his account. If he were to return and do something you deemed unacceptable, you could swiftly warn him. If he did it again, then a short-term ban. Instead, this is a sledge-sized banhammer used when other options were available. It is a decision that is now publicly on LW and indicates that LW is possibly intolerant of things other than irrationality. I don’t think this is reflective of LW, and I think it reflects poorly on the moderation policies here. I don’t think it needs to be that way, though.
I think a conditional unbanning (i.e. 1 warning, with the next action being a swift short ban) is an appropriate action for the moderation team to make, and I implore you to reconsider your decision.
If you think this is not appropriate, then I request you explain why 2 years is an appropriate length of time, and why Periergo and curi should have identical ban lengths.
The alternative to pacifism does not need to be so heavy-handed.
I’d also like to note that curi has published a post on his blog regarding this ban; I read it after drafting this reply: http://curi.us/2381-less-wrong-banned-me
The above post explicitly says that the ban isn’t a personal judgement of curi. It’s rather a question of whether it’s good to have curi around on LessWrong, and that’s where LW standards matter.
That seems like a sentiment indicative of ignoring the reason for which he was banned. It was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it’s not valuable for people to interact with him on LessWrong.
If you imprison someone who murdered in the past because you are afraid they will murder again, that’s not pre-crime in most common senses of the word.
Additionally, even if it were, LW is not a place with virtue-ethics standards but one with utilitarian standards. Taking action to prevent things that are likely to negatively affect LW from happening in the future is perfectly consistent with the idea of good gardening.
If you stand in your garden you don’t ask “what crimes did the plants commit and how should they be punished?” but you focus on the future.
Isn’t it even worse then b/c no action was necessary?
But more to the point, isn’t the determination X person is not good to have around a personal judgement? It doesn’t apply to everyone else.
I think what habryka meant was that he wasn’t making a personal judgement.
The traditional guidance for up/downvotes has been “upvote what you would like to see more of, downvote what you would like to see less of”. If this is how votes are interpreted, then heavy downvotes imply “the forum’s users would on average prefer to see less content of this kind”. Someone posting the kind of content that’s unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.
I agree with “being disliked is not a reason for punishment”, but people also have the right to choose who they want to spend their time with, even if someone who they preferred not to spend time with viewed that as being punished. In my book, banning people from a private forum is more like “choosing not to invite someone to your party again, after they previously caused others to have a bad time” than it is like “punishing someone”.
I’m a fan of solving problems with technology. One way to solve this problem of people not liking an author’s content is to allow users to put people on an ignore list (and maybe for some period of time).
How many people here remember Usenet’s kill files?
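For concreteness, here is a minimal sketch of what such an ignore list could look like (hypothetical types and names, not an existing LessWrong feature):

```typescript
// Minimal sketch of a per-user ignore list with optional expiry.
// Hypothetical types/names; not an existing LessWrong feature.
interface IgnoreEntry {
  ignoredUserId: string;
  expiresAt?: Date; // undefined = ignore indefinitely
}

class IgnoreList {
  private entries = new Map<string, IgnoreEntry>();

  // Ignore a user, optionally only for a limited duration.
  ignore(userId: string, durationMs?: number): void {
    this.entries.set(userId, {
      ignoredUserId: userId,
      expiresAt: durationMs ? new Date(Date.now() + durationMs) : undefined,
    });
  }

  // Returns true if content from this user should be hidden right now.
  isIgnored(userId: string): boolean {
    const entry = this.entries.get(userId);
    if (!entry) return false;
    if (entry.expiresAt && entry.expiresAt < new Date()) {
      this.entries.delete(userId); // expired entries fall away automatically
      return false;
    }
    return true;
  }
}
```

The UI would then simply skip rendering posts and comments whose authors are on the reader’s ignore list, which is essentially what Usenet kill files did client-side.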
You’re using quotes but I am not sure what you’re quoting, do you just mean to emphasize/offset those clauses?
Sure, that might be part of the reason curi hadn’t been active on LW for 13 days at the time of the ban.
(continued)
I don’t know if curi thinks it’s punishment. I think it’s punishment, and I think most ppl would agree that ‘A ban’ would be an answer to the question (in online forum contexts, generally) ‘What is an appropriate punishment?’ That would mean a ban is a punishment.
LW mods can do what they want; in essence it’s their site. I’m arguing:
it’s unnecessary
it was done improperly
it reflects badly on LW and creates a hostile culture to opposing ideas
(3) is antithetical to the opening lines of the LessWrong FAQ (which I quote below). Note: I’m introducing this argument in this post, I didn’t mention it originally.
significant parts of habryka’s post were factually incorrect. It was noted, btw, in FI that a) habryka’s comments were libel, and b) that curi’s reaction—quoted below—is mild and undercuts habryka’s claim.
curi wrote (in his post on the LW ban)
from the FI discussion:
LessWrong FAQ (original emphasis)
I don’t think the things people have described (in this thread) as seemingly important parts of LW are at all reflected by this quote; rather, they contradict it.
I am not currently aware of any factual inaccuracies, but would be happy to correct any you point out.
The only thing you pointed out was something about the word “threat” being wrong, but that only appears to be true under some very narrow definition of threat. This might be weird rationalist jargon, but I’ve reliably used the word “threat” to simply mean signaling some kind of intention of inflicting some kind of punishment on the other person in response to some condition. Curi and other people from FI have done this repeatedly, and the “list of people who have evaded/lied/etc.” is exactly one such threat, whether explicitly labeled as such or not.
The average LessWrong user would pretty substantially regret having engaged with curi if they later ended up on that list, so I do think it’s a pretty concrete punishment, and while there might be some chance you are unaware of the negative consequences, this doesn’t really change the reality very much: based on the way I’ve seen curi behave on the site, engaging with him is a trap that people are likely to regret.
This game-theoretic concept of “threat” is fine, but underdetermined: what counts as a threat in this sense depends on where the “zero point” is; what counts as aggression versus self-defense depends on what the relevant “property rights” are. (Scare quotes on “property rights” because I’m not talking about legal claims, but “property rights” is an apt choice of words, because I’m claiming that the way people negotiate disputes that don’t rise to the level of dragging in the (slow, expensive) formal legal system has a similar structure.)
If people have a “right” to not be publicly described as lying, evading, &c., then someone who puts up a “these people lied, evaded, &c.” page on their own website is engaging in a kind of aggression. The page functions as a threat: “If you don’t keep engaging in a way that satisfies my standards of discourse, I’ll publicly call you a liar, evader, &c..”
If people don’t have a “right” to not be publicly described as lying, evading, &c., then a website administrator who cites a user’s “these people lied, evaded, &c.” page on their own website as part of a rationale for banning that user, is engaging in a kind of aggression. The ban functions as a threat: “If you don’t cede your claim on being able to describe other people as lying, evading, &c., I won’t let you participate in this forum.”
The size of the website administrator’s threat depends on the website’s “market power.” Less Wrong is probably small enough and niche enough such that the threat doesn’t end up controlling anyone’s off-site behavior: anyone who perceives not being able to post on Less Wrong as a serious threat is probably already so deeply socially-embedded into our little robot cult, that they either have similar property-rights intuitions as the administrators, or are too loyal to the group to publicly accuse other group members as lying, evading, &c., even if they privately think they are lying, evading, &c.. (Nobody likes self-styled whistleblowers!) But getting kicked off a service with the market power of a Google, Facebook, Twitter, &c. is a sufficiently big deal to sufficiently many people such that those websites’ terms-of-service do exert some controlling pressure on the rest of Society.
What are the consequences of each of these “property rights” regimes?
In a world where people have a right to not be publicly described as lying, evading, &c., people don’t have to be afraid of losing reputation on that account. But we also lose out on the possibility of having a public accounting of who has actually in fact lied, evaded, &c.. We give up on maintaining the coordination equilibrium such that words like “lie” have a literal meaning that can actually be true or false, rather than the word itself simply constituting an attack.
Which regime better fulfills our charter of advancing the art of human rationality? I don’t think I’ve written this skillfully enough for you to not be able to guess what answer I lean towards, but you shouldn’t trust my answer if it seems like something I might lie or evade about! You need to think it through for yourself.
For what it’s worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn’t check out this page, and it didn’t factor into my sense that curi shouldn’t be on LW.
I also find it somewhat worrying that, as I understand it, the page was a combination of “quit”, “evaded”, and “lied”, of which ‘quit’ is not worrying (I consider someone giving up on a conversation with curi understandable instead of shameful), and that getting wrapped up in the “&c.” instead of being the central example seems like it’s defining away my main crux.
To elaborate on this, I think there are two distinct issues: “do they have the right norms?” and “do they do norm enforcement?”. The second is normally good instead of problematic, but makes the first much more important than it would be otherwise. I see Zack_M_Davis as pointing out “hey, if we don’t let people enforce norms because that would make normbreakers feel threatened, do we even have norms?”, which is a valid point, but which feels somewhat irrelevant to the curi question.
If I understand you correctly then your primary argument appears to be that a ban is (1) too harsh a judgment where a warning would have sufficed, (2) that curi ought to have some sort of appeals process and (3) that habryka’s top-level comment does not provide detailed citations for all the accusations against curi.
(1) Curi was warned at least once.
(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.
(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi’s profile, and even from curi’s response you linked to, that curi is damaging to productive dialogue on Less Wrong.
The strongest claim against curi is “a history of threats against people who engage with him [curi]”. I was able to confirm this via a quick glance through curi’s past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.
Edit: grammar.
lsusr said:
I’m reasonably sure the Slack comments refer to events from 3 years ago, not anything in the last few months. I’ll check, though.
There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY
gjm said:
I don’t think there is a case for (1). Unless gjm is a mod and there are things I don’t know?
lsusr said:
habryka explicitly mentions curi changing his LW commenting policy to be ‘less demanding’. I can see the motivation for expedition, but the mods don’t have to speedrun it. I think it’s bad there wasn’t any communication beforehand.
lsusr said:
I don’t think that’s the case. His net karma has increased, and judging him for content on his blog—not his content on LW—does not establish whether he was ‘damaging to productive dialogue on Less Wrong’.
His posts on Less Wrong have been contributions; for example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental is a direct response to one of EY’s posts and it was net-upvoted. He followed that up with two more net-upvoted posts:
www.lesswrong.com/posts/HpiTacu2P6c22GEzF/asch-conformity-could-explain-the-conjunction-fallacy
www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental
This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi / FI. If that’s the main point of contention, and that’s why he’s being banned, then so be it. But he doesn’t deserve to be mistreated and have baseless accusations thrown at him.
lsusr said:
We have substantial disagreements about what constitutes a threat, in that case. I think a threat needs to involve something like danger or violence. It’s not a ‘threat’ to copy public discussion under fair use for criticism and commentary.
I googled the definition (for define:threat), and these are the two results:
- a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
- a person or thing likely to cause damage or danger.
Neither of these apply.
I prefer this definition, “a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace”. I think the word “retribution” implies undue justice. A “threat” need only imply retaliation, not retribution, of hostile action.
Evidently yes, as do dictionaries.
This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.
This definition doesn’t describe anything curi has done (see my sibling reply linked below), at least that I’ve seen. I’d appreciate any quotes you can provide.
https://www.lesswrong.com/posts/PkpuvsFYr6yuYnppy/open-and-welcome-thread-september-2020?commentId=H2tyDgoRFov8Xs8HS
This definition seems okay to me.
I don’t know how justice can be undue, do you mean like undue or excessive prosecution? Or persecution perhaps? Though I don’t think either prosecution or persecution describes anything curi’s done on LW. If you have counterexamples I would appreciate it if you could quote them.
I don’t think the dictionary definitions disagree much. It’s not a substantial disagreement. thesaurus.com seems to agree; it lists them as ~strong synonyms. The crux is retribution vs retaliation, and retaliation is more general. The mafia can threaten shopkeepers with violence if they don’t pay protection. I think retaliation is a better fitting word.
However, this still does not apply to anything curi has done!
I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough. I do not believe that elaborating upon my reasoning would get you to change your mind about the core disagreement. Elaborating upon my position would therefore waste both of our time.
The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.
Curi is being banned for wasting time with long, unproductive conversations. It would be ironic for me to embroil myself in such a conversation as a consequence.
I don’t either.
Sure, we can stop.
I don’t know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilling to add it to the rules, he should be unbanned, simple as that.
Maybe that idea is worth discussing? I think it’s reasonable. If something is an offense it should be publicly stated as such and new and continuing users should be able to point to it and say “that’s why”. It shouldn’t feel like it was made up on the fly as a special case—it’s a problem when new rules are invented ad-hoc and not canonicalized (I don’t have a problem with JIT rulebooks, it’s practical).
This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.
Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.
I think this is fair, and additionally I maybe shouldn’t have used the word “truly”; it’s a very laden word. I do think that, on the balance of probabilities, my case does reduce the likelihood of something being foundationally wrong with it, though. (Note: I’ve said this in, what I think, is a LW friendly way. I’d say it differently on FI.)
One thing I do think, though, is that people’s social anxiety does not make things in general right or wrong, but can be decisive wrt thinking about a single action.
Another thing to point out is anonymous participation in FI is okay, it’s reasonably easy to use an anonymous/pseudonymous email to start with. curi’s blog/forum hybrid also allows for anonymous posting. FI is very pro-free-speech.
I think that’s okay, curi isn’t trying to attract everyone as an audience, and FI isn’t designed to be a forum which makes people feel comfortable, as such. It has different goals from e.g. LW or a philosophy subreddit.
I think we’d agree that norms at FI aren’t typical and aren’t for everyone. It’s a place where anyone can post, but that doesn’t mean that everyone should, sorta thing.
I don’t understand this sentence at all. How has he already been punished for his past behavior? Indeed, he has never been banned before, so there was never any previous punishment.
I welcome the transparency, but this “I don’t want others to update on this as being much evidence about whether it makes sense to have curi in their communities” seems a bit weird to me. “a propensity for long unproductive discussions, a history of threats against people who engage with him” and “I assign too high of a probability that old patterns will repeat themselves” seem like quite a judgement and why would someone else not update on this? Additionally, I think that while a ban is sometimes necessary (e.g. harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone’s authority to her own shortform; all of these things for some time.
The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.
I also said “I don’t want others to think this is much evidence”, not “this is no evidence”. Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn’t be very surprised to see curi participate well in other online communities.
I also didn’t understand what your sentence was saying. It read to me as “I don’t want people to update on this post”. When you pointed specifically to LW’s culture (which is very argumentative) possibly being a key cause it was clearer what you were saying. Thanks for the clarification (and for trying to avoid negative misinterpretations of your comment).
I am not sure. I really don’t like the world where someone is banned from commenting on other people’s posts, but can still make top-level posts, or is banned from making top-level posts but can still comment. Both of these end up in really weird equilibria where you sometimes can’t reply to conversations you started or respond to objections other people make to your arguments, and that just seems really bad.
I also don’t really know what those things would have done. I don’t think those things would have reduced the uncertainty of whether curi is a good fit for LessWrong super much, and feel like they could have just dragged things out into a long period of conflict that would have been more stressful for everyone.
The “blocking someone from writing anything” does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don’t think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.
Blocking from writing but allowing to vote seems like a really bad idea. Being read-only is already available — that’s the capability of anyone without an account.
Generally I’d be against complicated subsets of permissions for various classes of disfavoured members. Simpler to say that someone is either a member, or they’re not.
Additionally, I’d like to know whether people are warned before they are banned, and whether they are asked about their own view of the matter.
Sometimes people are warned, and sometimes they aren’t, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren’t warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list.]
Yeah, almost everyone who we ban who has any real content on the site is warned. It didn’t feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.
I think you’re denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)
curi evidently wanted to change some things about his behaviour, otherwise he wouldn’t have updated his commenting policy. How do you know he wouldn’t have updated it more if you’d warned him? That’s exactly the type of criticism we (CR/FI) think is useful.
That sort of update is exactly the type of thing that would be reasonable to expect next time he came back (considering that he was away for 2 weeks when the ban was announced). He didn’t want to be banned, and he didn’t want to have shitty discussions, either. (I don’t know those things for certain, but I have high confidence.)
What probability would you assign to him continuing just as before if you said something like “If you keep continuing what you’re doing, I will ban you. It’s for these reasons.” Ideally, you could add “Here they are in the rules/faq/whatever”.
Practically, the chance of him changing is lower now because there isn’t any point if he’s never given any chances. So in some ways you were exactly right to think there’s low probability of him changing, it’s just that it was due to your actions. Actions which don’t need to be permanent, might I add.
I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that. But it seems to me like you’re focusing on the benefit to him / “is there any chance he would get better?”, as opposed to the benefit to the community / “is it reasonable to expect that he would get better?”.
As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).
I’m not sure about other cases, but in this case curi wasn’t warned. If you’re interested, he and I discuss the ban in the first 30 mins of this stream
I agree with your first paragraph.
Whether someone is a “good fit” should already be visible from their karma (and I think karma then translates into karma points per vote?), and I don’t see why that should additionally lead to a ban or something. A ban, or a writing ban, could result from destructive behavior.
I think there is no real point in having people blocked from reading. Writing—ok (though after all things start out as personal blog posts in any case and don’t have to be made frontpage posts).
FYI I am on that list and fine with it—curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto
I think you’re wrong on multiple counts. Will reply more in a few hours.
FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph.
http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059
I don’t recall learning in school that most of “the bad guys” from history (e.g., Communists, Nazis) thought of themselves as “the good guys” fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we’re seeing play out today). So why hasn’t civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton’s Fence, and teaching it widely would make the world even worse off on expectation?
I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:
We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. … That we are also very confused about normativity and meta-ethics and don’t really know what we mean by “should”, including in this document...
Yeah, I realize this would be a hard sell in today’s environment, but what if building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful “corrigible” or “obedient” AI to a civilization full of people with crazy amounts of moral certainty?
Non-dualist philosophies such as Zen place high value on confusion (they call it “don’t know mind”) and have a sophisticated framework for communicating this idea. Zen is one of the alternative intellectual traditions I alluded to in my controversial post about ethical progress.
The Dao De Jing 道德经, written 2.5 thousand years ago, includes strong warnings against ontological certainty (and, by extension, moral certainty). If we naïvely apply the Lindy Effect then Chinese civilization is likely to continue for thousands more years while Western science annihilates itself after mere centuries. This may not be a coincidence.
Here is the manifesto you are looking for:
Unfortunately, the duality of emptiness and form is difficult to translate into English.
States evolve to perpetuate themselves. Civilization has figured it out (in the blind idiot god sense of “figured it out”) that moral uncertainty is teachable and decreases trust in the state ideology. You have it backward. The states in existence today promote moral certainty in children for exactly the same reason the Communist and Nazi states did.
I expect it is this. General moral uncertainty has all kinds of problems in expectation, like:
It ruins morality as a coordination mechanism among the group.
It weakens moral conviction in the individual, which is super bad from the perspective of people who believe there are direct consequences for a lack of conviction (like Hell).
It creates space for different and possibly weird moralities to arise; I don’t know of any moral systems that think it is a good thing to be a member of a different moral system, so I expect all the current moral systems to agree on this one.
I feel like the first bullet point is the real driving force behind the problems it would prevent, anyhow. Moral uncertainty doesn’t cause people to do good things; it keeps them from doing good things (that are different from other groups’ definitions of good things).
This is sort of a rehash of sibling comments, but I think there are two factors to consider here.
The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between “correct for <country> in <year>” and “correct everywhere and for all time.”
The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like “freedom of religion” are the things civilization figured out to both have narrow shared goals (like “keep the peace”) and not expansive shared goals (like “as many get to Catholic Heaven as possible”). It is unclear to me whether we’re better off with moral uncertainty as a generator for “narrow shared goals”, or whether narrow shared goals are what we should be going for.
I would guess that teaching that fact is not enough to instill moral uncertainty. And that instilling moral uncertainty would be very hard.
Often expressing any understanding towards the motives of a “bad guy” is taken as signaling acceptance for their actions. There was e.g. controversy around the movie Downfall for this:
Wouldn’t more moral uncertainty make people less certain that Communism or Nazism were wrong?
That’s definitely how it was taught in my high school, so it’s not unknown.
Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn’t be so certain about our morality...
We want to teach children to accept the norms of our society and the narrative we tell about it. A lot of what we teach is essentially pro-system propaganda.
Teaching moral uncertainty doesn’t help with that and it also doesn’t help with getting students to score better on standardized tests which was the main goal of educational reforms of the last decades.
Compulsory education is an organ of the state. Nation-states evolve to perpetuate their own existence. Teaching moral uncertainty is counter-productive toward maintaining the norms of a nation-state.
I guess it’s because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett’s Gold Army/Silver Army analogy explains how conviction can build loyalty and strength, but a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.
If I had to guess, I’d guess the answer is some combination of “most people haven’t realized this” and “of those who have realized it, they don’t want to be seen as sympathetic to the bad guys”.
The full-text version of the Embedded Agency sequence has colors! And it’s not just in the form of an image, but they’re actually embedded as text. Is there any way a normal LW user can do the same with any of the three editors? (I.e., LW docs, Draft-JS, or Markdown.)
Alas, not. The reason is a bit silly. I can enable text-colors in our editor, but this has the unintended side-effect of copying over the text-color from wherever you are copying your text, even the shade of black that that other program uses, which is hard to spot, but ends up looking kind of unsettling on LessWrong. Since the vast majority of posts are just written in normal “black-or-grey on white” text colors, the cost of that seemed larger than the benefit of allowing people to use colored text.
Eventually we could probably do something clever, like filtering out grey shades of text when you copy-paste it into the editor, but I haven’t gotten around to that, though PRs are always welcome.
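To make “something clever” concrete, here is a rough, hypothetical sketch (illustrative names, not the actual editor code) of the kind of paste filter that would strip near-greyscale inline colors while keeping deliberately colored text:

```typescript
// Hypothetical sketch of filtering out grey/black shades on paste,
// so the default text color is preserved while intentional colors survive.
function isNearGreyscale(hex: string, tolerance = 16): boolean {
  const m = /^#?([0-9a-f]{6})$/i.exec(hex.trim());
  if (!m) return false;
  const n = parseInt(m[1], 16);
  const r = (n >> 16) & 0xff, g = (n >> 8) & 0xff, b = n & 0xff;
  // A color is "grey-ish" if its RGB channels are all close to each other.
  return Math.max(r, g, b) - Math.min(r, g, b) <= tolerance;
}

function stripGreyColors(pastedHtml: string): string {
  // Drop inline `color: #rrggbb` declarations that are effectively greyscale.
  return pastedHtml.replace(/color:\s*(#[0-9a-f]{6})\s*;?/gi, (decl, hex) =>
    isNearGreyscale(hex) ? "" : decl
  );
}
```

Colors written as rgb(...) or as named colors would need similar handling, which is part of why this is more fiddly than it first looks.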
Apparently OpenAI has sold Microsoft some sort of exclusive licence to GPT-3. I assume this is bad for the prospects of anyone else doing serious research on it.
Is there visible reporting on this?
Some. Microsoft’s blog post; OpenAI’s blog post; article in The Verge; article in Engadget; article in VentureBeat; article in MIT Technology Review.
Yup, https://www.theverge.com/2020/9/22/21451283/microsoft-openai-gpt-3-exclusive-license-ai-language-research
I recently realized that I’ve been confused about an extremely basic concept: the difference between an Oracle and an autonomous agent.
This feels obvious in some sense. But actually, you can ‘get’ to any AI system via output behavior + robotics. If you can answer arbitrary questions, you can also answer the question ‘what’s the next move in this MDP’, or less abstractly, ‘what’s the next steering action of the imaginary wheel’ (for a self-driving car). And the difference can’t be ‘an autonomous agent has a robotic component’.
The essential difference seems to be that the former system only uses its output channels whenever it is probed, whereas the latter uses them autonomously. But I don’t ever hear people make this distinction. I think part of the reason why I hadn’t internalized this as an axis before is that there is the agent vs. nonagent distinction, but actually, those two axes are orthogonal to each other. We clearly can have any of the four combinations of {agent, nonagent} × {autonomous, non-autonomous}.[1]
It’s a pretty bad sign that I don’t know without looking at the definition whether ‘tool AI’ refers to the entire bottom half or just the bottom-left quadrant. With looking, it seems to be just the latter.
What led me to this was thinking about Corrigibility. I think it is applicable to the entire top half, all agent-like systems, but it feels like a stronger requirement for the top right, autonomous agents. If you have an oracle, then corrigibility seems to reduce to ‘don’t try to influence user’s behavior through your answers’.
When I look at this, I am convinced by the arguments that we probably can’t just build Tool AI, but I super want the most powerful systems of the future to be non-autonomous. That just seems to be way safer without sacrificing a lot of performance. I think because of this, I’ve been thinking of IDA as trying to build non-autonomous systems (basically oracles), even though the sequence pretty clearly seems to have autonomous systems in mind.[2] On the other hand, Debate seems to be primarily aimed at non-autonomous systems, which (if true) is an interesting difference.
So is all of this just news to me, and actually everyone is aware of this distinction?
And if you added a third axis for ‘robotic/non-robotic’, we would end up with examples in all eight areas.
I award myself an F- for doing this.
Two existing suggestions for how to avoid existential risk naturally fall out of this framing.
Go all the way to the left (even further than the picture implies) by giving the AI no output channels whatsoever. This is Microscope AI.
Go all the way to the bottom and avoid all agent-like systems, but allow autonomous systems like self-driving cars. This is (as I understand it) Comprehensive AI Services (CAIS).
I’m going on a 30-hour roadtrip this weekend, and I’m looking for math/science/hard sci-fi/world-modelling Audible recommendations. Anyone have anything?
Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.
If anyone happens to be willing to privately discuss some potentially infohazardous stuff that’s been on my mind (and not in a good way) involving acausal trade, I’d appreciate it—PM me. It’d be nice if I can figure out whether I’m going batshit.
So which simulacrum level are ants on when they are endlessly following each other in a circle?
Over, and over… the pheromones… the overwhelming harmony...
Do those of you who live in America fear the scenarios discussed here? (“What If Trump Loses And Won’t Leave?”)
I do, at least. I don’t think “What if Trump loses and won’t leave” is the best summary of my concern; the best summary is “What if the election is heavily disputed.”
“What if Trump Loses...” is just the title of the article, but the article also discusses scenarios where “Biden might be the one who disputes the result”.
I do not know whether this has already been mentioned on LessWrong, but 4-6 weeks ago you could read on German news websites that commercially available mouthwash has been tested to kill coronavirus in the lab, and the (positive) results have been published in the Journal of Infectious Diseases.
You can click through this article to see the ranked names of the mouthwash brands and their “reduction factor”, though the sample sizes seemed quite small. You can also find a list in this overview article. In an article I saw today on this topic, the author warned against using the stuff permanently because it also kills the desirable part of your oral flora. But it was suggested that it may help once you are infected, and may possibly help prophylactically (of course only in the sense of helping when you are possibly infected).
I’m so bored of my job, I need a programming job that has actual math/algorithms :/ I’m curious to hear about people here who have programming jobs that are more interesting. In college I competed at a high level in ICPC, but I got it into my head that there are so few programming jobs with actual advanced algorithms that if your name on Topcoder isn’t red you might as well forget about it. I ended up just taking a boring job at a top tech company that pays well but does very little for society and is not intellectually stimulating at all.
Have you read https://www.benkuhn.net/hard/ ? Curious what you think. (Disclosure: I started the company that Ben works for, which does not have hard eng problems but does have a high potential for social impact)
I feel happy pulling up Kattis and doing some algorithm questions, so there is definitely joy to be had chasing technical questions. Ben doesn’t seem to be disputing that, but is offering two other things you can chase.
I don’t know if this differs from person to person, but for me gamifying a problem can make me care more about something; it can’t make me care about something I don’t care about at all.
This has been in my head for months because everyone* gives a variation of this advice and it feels like it’s missing the hard part. It started when I saw a clip on Reddit of Dr. K from Healthy Gamer saying something along the lines of “If you don’t know what you want to do, get a piece of paper and write down everything wrong with the world. In 5 minutes the paper will be almost full” and… What? No? I mean, things are problems in that they make people’s lives worse. But I notice very very little actually changes how I feel. So why would I expect anything I do to change how someone else feels if nothing they do can change how I feel? There are only two axes that actually change how I feel about life: lonely VS belonging and bored VS engaged. I don’t really have a reason to expect other people are very different, except that people in worse life situations also have an unsafe VS secure axis. So the problems are “loneliness” and “listlessness”. Everyone acts like there are important problems everywhere. You see people saying ideas for side projects are a dime a dozen, but here I am, where I actually have the funds to quit and make something I thought had value, and nothing I can think of seems to have any value.
*Everyone except one friend on Paxil who assures me the solution to my problem is Paxil and one friend who is convinced LSD is the solution to all problems. I remain unconvinced.
Quantitative finance has use for people who know advanced math and algorithms. (Though they are not known for doing great good for society.)
You can also get around this problem by starting your own ML startup. (I did this.) The startup route takes work and risk tolerance but provides high positive externalities for society.