Mapping the Archipelago
I got excited reading Meta-tations on Moderation: Towards Public Archipelago for two reasons: there’s a clear island of the archipelago I’ve been mostly avoiding on LessWrong, and the whole place has been growing at a high enough rate to demand fracturing.
Since we have the chance to direct the growth of the brand new archipelago, let’s start a discussion down one level of meta: what specific islands do you want to see? Second, how should discussion and moderation norms differ between them?
Three islands of current LW, according to me:
AI Risk: Serious discussion for serious folk. No smiles allowed.
Instrumental Rationality: The means to get to the ends. Whatever they might be.
Fluff and Fiction: Blood for the Art God! Fun over fact.
(Guess which island I avoid.)
I was mostly attracted to LW for epistemic rationality, and I think epistemic rationality (rather than life advice or creativity) is LW’s main advantage compared to all other online communities. It was also the subject of many posts by Eliezer, including the sequence Mysterious Answers which the wiki calls “probably the most important core sequence”. But your list doesn’t mention it, and indeed it doesn’t get much discussion on LW lately, though people like Abram and Scott Garrabrant (and me) are trying. I wonder how things ended up like this.
This comment made me update hard on epistemic rationality being an important thing we should continue to discuss more. I’m also not sure what to do about it.
I suppose it’s because so much of the low-hanging fruit has already been picked. The greatest limiting factor on most people here is not their epistemics, though we definitely need some people to keep working on this regardless.
It seems to me that the limiting factor is still epistemic rationality (or rather, relative power of epistemic rationality vs other factors), and moreover we are regressing. For example, old LW was able to say confidently that religion isn’t true and move on from that. We can’t do anything like that today.
I would be surprised if you could not say religion isn’t true and move on from that.
Also, I generally agree with your perspective that the best content on LW is about epistemic rationality (and usually tends more towards theoretical rationality than practical rationality, if I use the distinction Scott introduced below). And am interested in incentivizing more content in that direction.
I also think a lot of the best content on LW was of the form of fact-posts and cross-pollination of a large swath of separate existing bodies of knowledge (in the style of Sarah Constantin’s fact posts, Luke’s literature reviews and Scott’s analysis of various theoretical topics), which doesn’t really fit into any of the categories and is more related to something like “empirical big-picture studies on how the world functions”.
We can’t? Do you have in mind the Mythic Mode post or something like that?
Do you suppose this could be a tension between epistemic and instrumental rationality, where ‘religion’ is recast as ‘social organization for value promotion and individual welfare’, rather than as assertions about the factual nature of the doctrine? It’s entirely possible I am simply missing the pro-faith posts/comments because I am dismissing them at a level beneath the one that I notice, but I have observed two things in the community over time:
1) In the Sequences, religion was specifically cited as worthy of emulation: directly related to this post and the Meta-tations post were the comparison with explanation-less contributions at temple and the comparison to the Catholic Church in the context of charity.
2) I noticed posts which simply re-cast religious preferences in the language of instrumental rationality, which appeared to boil down to asserting that belief-in-belief was rational.
In the old LW immediately before I made the switch to LW2, there were posts appearing which expressly advocated supernatural practices as instrumentally rational, with little disagreement. That was when I personally jumped ship for LW2.
Very interested in links of the last type.
To me, the best content in the old Less Wrong does not fit on any of your islands. It was developing theoretical rationality.
It would probably go on or near the AI safety island, but I feel like that doesn’t fairly represent its generality.
Note: It seems easy to conflate theory with epistemic and practice with instrumental. I think this is a bad combination of buckets, and when I say theory here, I do not exclude theoretical instrumental rationality.
Those islands you describe seem real (although “no smiles allowed” doesn’t resonate). But I think the sort of islands I’m looking for are arranged on a different axis than this.
I want to talk about AI, and Instrumental Rationality, and Questionable Practices That Might Help Your Epistemics Or Might Destroy Them, and High Adventure, and Blood for the Art God, and a bunch of other things – they all seem fairly connected to me (since I’m a person, and everything I do is connected in some way).
The Islands I’m interested in are “the island where everyone is required to be good at X, but not required to be good at Y” and “the island where everyone is required to accept norm/paradigm Z because life is short and we can’t argue about things forever”, etc.
My underlying island that I’d like to cultivate, regardless of which object level topic we’re discussing, is something like:
Accomplishing a Goal. We’re not just talking for no reason, we’re talking because we’re brainstorming ideas or figuring out how to learn the current best practices or building an understanding of something. The goal might vary, which might change some of the norms that are ideal for the job.
Leveling Up. In addition to solving an object-level goal, we’re trying to grow in the process (at epistemics, at learning, at doing).
Disagreements are there to be resolved, and knowledge built upon. 10 years ago I’d still run into people arguing about creationism; I’m deeply glad I don’t have to worry about that in my current filter bubble, and I’d like my bubble to spend the next 10 years getting to the point where things that are currently considered contentious are considered resolved. (And, to be clear, I’d like to get there by actually resolving things and people changing their minds, rather than people filtering themselves out.)
I currently believe a keystone of this is that people should be striving to have belief structures that are “cruxy” – i.e. have reasons for believing the things you do, and when you encounter disagreement, attempt to resolve that disagreement so that we don’t have to hash it out over and over. (Sort of a “hacker ethos” for beliefs: the hacker mentality is that problems are solvable, solutions are shareable, and no problem should have to be solved more than once.)
Perhaps ironically, discussion that doesn’t try to seriously resolve disagreement feels like either politics or recreation, both of which are nice and all but not what I’m interested in LessWrong for.
(This doesn’t mean I expect disagreement to always get resolved anytime soon, just that as long as it isn’t, you should have a nagging sense that your job isn’t done.)
Introspection, Extrospection and Social Skills necessary for collaboration—This has some worldview baked into it that not everyone shares, and that I’m not currently sure about the framing of. I’m open to running threads that don’t assume that worldview (and certainly am up for participating in other people’s islands that don’t). But, threads that don’t assume this involve me running at a psychological deficit and will probably burn me out over time.
I expect the people I’m actively working with to be able to notice their own needs, notice the reasons for their beliefs, and be able to communicate about them in a collaborative (rather than confrontational) fashion. If you’re at a point where, when something threatening to you happens, you don’t feel like you have the social tools to deal with it non-confrontationally, I’d prefer you talk to me (or someone else you trust) about it one-on-one until you’ve gotten a handle on it.
I think my main confusion now is whether we are thinking of archipelago mainly as “fuzzy boundaries between individual authors’ preferences” or “explicitly cut-out regions built into the site” like subreddits. The former seems to be where the meta decisions are going, but the latter is what I instinctively anchored on when I heard the word archipelago.
My sense is that the first model will be unlikely to result in clear enough boundaries between islands for LW to noticeably separate. Few people (certainly not myself) have the energy and mental fortitude to tend a walled garden the way Scott does. Also, there’d be too many sets and combinations of norms and tastes to keep track of.
Meanwhile, if you end up building “subforums” or “subreddits” there really only deserve to be O(5) of them, and it will be possible to keep track of norms and values across them for authors and readers alike.
tl;dr: Moderation power is a safety-blanket eject button for most authors, not an active tool we will use to actually build gardens. If this is the main meta change towards archipelago, it is unlikely to work.
Gotcha. There’s a different phrase we’d tossed around a bit, which was “Private Fiefdoms”, which I think has more of the connotations of what the current implementation is pointed towards.
But, the long-term goal is (most likely – plans can change a bunch in the meantime) more like a genuine archipelago where people with shared conversational goals/norms have banded together. Subreddits or what-have-you. It’s just that for the immediate future, it’s unclear what sort of islands we might coalesce into, or who trusts who to run an island.
Some random bits of my worldview here:
I think it is necessary to have a leader or small trusted council run an island.
If an island is named after a topic, that means anyone else who wants to run a space around that topic but optimized around different norms has to fight over the namespace.
A lot of what I’m trying to do is sidestep fights over Overton windows. A subreddit still creates a venue to fight over, if people have subtle or not-so-subtle differences of opinion on what’s good. A newly created subreddit might create a power vacuum for people to fight over. Starting with user-fiefdoms first lets people get a sense of who they trust, and who they might want to join forces with to start a council.
I think if there end up being a few dominant modes of discussion, it’ll be easier to express a given user’s space as “X Norms, but with this small diff”.
An object-level norm that I want to push for: being allowed to revive old threads. LW right now feels to me like a “travelling band/circus/horde” that flits about from one party to the next. Part of the problem is that the frontpage cycles so quickly, and part of it is that almost nobody feels comfortable commenting on something more than a week old.
I might even make the following feature request: allow people to “refresh” old threads at the cost of karma (say proportional to how old it is) and push it to the top of frontpage. This is something I would definitely like to use to periodically provoke discussion and elaborate on certain posts in the Sequences, for example.
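Concretely, the mechanic I’m imagining looks something like the sketch below. The function names and the linear-in-age pricing are placeholders I made up for illustration, not anything the LW team has proposed.

```typescript
// Hypothetical sketch: pay karma to "refresh" an old post back onto the frontpage.
// The cost scales linearly with the post's age in days; every name and number here
// is an assumption, not the actual LW codebase.

interface Post {
  id: string;
  postedAt: Date;
  lastRefreshedAt?: Date; // the frontpage would sort on lastRefreshedAt ?? postedAt
}

const KARMA_PER_DAY = 0.5; // assumed pricing knob

function refreshCost(post: Post, now: Date = new Date()): number {
  const ageDays =
    (now.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60 * 24);
  return Math.ceil(ageDays * KARMA_PER_DAY);
}

// Returns the user's remaining karma after paying to bump the post.
function refreshPost(post: Post, userKarma: number, now: Date = new Date()): number {
  const cost = refreshCost(post, now);
  if (userKarma < cost) {
    throw new Error(`Refreshing this post costs ${cost} karma; you have ${userKarma}.`);
  }
  post.lastRefreshedAt = now;
  return userKarma - cost;
}
```

Under this made-up pricing, bumping a year-old Sequences post would cost around 180 karma, which keeps refreshes rare enough that the frontpage isn’t swamped by revivals.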
I strongly agree with this, and I want to do a bunch more things to make people feel more comfortable reviving old threads. I actually quite like the idea of being able to pay karma to promote old content, and will discuss it with the rest of the team.
I agree with this concern, and this sort of thing is why I have previously suggested a “best posts of the last [some time period]” sidebar (or some such page element).
Also agreed, although my preferences about this are a bit jumbled together with my preferences about the Recent Comments section.
I’m pro replacing “Recent Comments” with “Recent Discussion” – instead of listing comments in chronological order, list posts in chronological order of “most recently commented on” and cluster recent comments together. This has the side effect of making it easy to revive threads.
(There’s a bunch of plausible implementation detail variations of this)
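To make the clustering variant concrete, here’s a rough sketch of one way it could work; the types and field names are purely illustrative, not the actual LW schema.

```typescript
// Rough sketch of "Recent Discussion": instead of a flat, newest-first comment feed,
// group recent comments under their posts and order the posts by the time of their
// most recent comment. All types and names are made up for illustration.

interface Comment {
  postId: string;
  postTitle: string;
  author: string;
  postedAt: Date;
  body: string;
}

interface DiscussionCluster {
  postId: string;
  postTitle: string;
  latestActivity: Date;
  comments: Comment[]; // newest first within the cluster
}

function recentDiscussion(comments: Comment[]): DiscussionCluster[] {
  const clusters = new Map<string, DiscussionCluster>();
  for (const c of comments) {
    const cluster = clusters.get(c.postId);
    if (cluster) {
      cluster.comments.push(c);
      if (c.postedAt > cluster.latestActivity) cluster.latestActivity = c.postedAt;
    } else {
      clusters.set(c.postId, {
        postId: c.postId,
        postTitle: c.postTitle,
        latestActivity: c.postedAt,
        comments: [c],
      });
    }
  }
  const result = Array.from(clusters.values());
  for (const cluster of result) {
    cluster.comments.sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
  }
  // A revived old thread floats to the top as soon as someone replies to it.
  return result.sort((a, b) => b.latestActivity.getTime() - a.latestActivity.getTime());
}
```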
(I am currently against this proposal as is, because I am worried that a frontpage without any content people can directly read will be much worse at getting people to engage with the site. The current recent comments section gives people the immediate feeling that something is happening on the site, even if they visit it for the first time, whereas the green comment icon only starts doing that after you’ve read a bunch of posts. However, the recent comments section does feel more like a band-aid than a real proper solution, and I am still looking for something that gets us the current benefits of the recent comments section, while also allowing old threads to be revived more effectively, and while showing new users something better than just the most recent comments)
(Not to hash out the entire discussion here, but to be clear, the version of this I’m most excited about is about as readable as the current comment section)
I think I object to this description of the island for several reasons, assuming it’s the same island I have in mind (probably you’re being tongue-in-cheek here, but even so). The goal for me is not to have fun, although fun is a nice side effect and explicitly stifling the fun would make the thing work worse; the goal for me is to change people’s behavior (in ways they would endorse) by dealing with emotional blocks.
The target audience I have in mind is roughly people like me 2-4 years ago: interested in things like AI safety in some abstract sense, but with substantial motivational / emotional problems getting in the way of doing anything about it. It’s hard to meaningfully contribute to AI safety research if, for example, you have a fear of stating opinions that haven’t been vetted by an authority (this isn’t the problem I was running into, I made it up, but I think things like this are reasonably common), ultimately stemming from a sense of feeling generally unsafe socially. Fears like that can go really deep and be really resistant to even being noticed, let alone debugged (by which I mean, it took me over a year to notice that I had something like this and several years of trying out all sorts of things to make noticeable progress on it, and I don’t expect this to be particularly unusual).
I think it’s important for us to be open, individually and collectively, to trying pretty weird shit in the name of tackling bugs like this, and accordingly the norms I want on this island are some combination of 1) being pretty open to people writing things in the emotional / poetic / illegible direction, or in general being experimental and trying things out in writing, and 2) being pretty intolerant of people who try to shut down 1).
I think the island you’re describing (“Fuzzy System 1 Stuff”?) slightly overlaps with but is significantly different from the one I had in mind. Probably only a small subset of rationalist fiction/poetry will be specifically designed to solve these motivational/emotional blocks.
The island I’m referring to is motivated by the idea that humor and truth-detection are closely linked (see the word “wit”). There’s something really curious and right about bursting into laughter when you prove a theorem, and about the mythological narrative that the Fool is the only one who can speak the truth (Jordan Peterson says the Fool is the precursor to the Savior). This is why the state of comedy and satire is supposed to be a good metric of intellectual freedom.
I’m pretty excited about your island too. One similar piece of weird shit I’m currently exploring is lucid dreaming, which may have the same effect as circling/mythology for a different subset of people. E.g. I noticed that I’ve had two dreams in the last month about leading groups of friends through secret passageways at Harvard. I can only interpret these dreams as being about being extremely motivated by status signalling with elite privileges. Of course this is obvious in hindsight, but it was sobering and surprising at the time.
I think a number of the things I’ve written on Jung, for example, fit better into your island than the comedy one.
What have you written on Jung? Can you link to your top one or two posts?
I’ve really only touched on tiny pieces motivated by Jung, say Circumambulation and Solitaire Principle.
Thanks for posting this, I think this is a very useful conversation to have. Even though I have my doubts about the delete-without-trace option, I actually think that an off-topic section could be quite useful.
One annoyance I had on Less Wrong before was that some people objected to any discussion of hypotheticals that they saw as unrealistic (in contrast, most philosophers and logicians see these as useful for testing whether you have proposed a theory that could in principle be perfectly correct). That’s a reasonable discussion to have—maybe our understanding of cases far removed from everyday reality is so limited that they don’t constitute a counter-example—but having that discussion each and every time was a major annoyance and definitely limited productive conversation. Ideally, I’d have one thread for everyone to have this discussion about hypotheticals, and people would either comment on it directly or create a new post and link to it in a comment. If someone had a comment about hypotheticals that was actually unique to that specific post, as opposed to being a general argument that belonged more in the hypotheticals thread, I’d be happy to make an exception for that. Anyway, that’s just how I’d use the moderation feature.
But as for what islands I’d like to see, I think that many of the channels on the LW Slack would make good islands. However, they wouldn’t be good for the front page, so I don’t think that will happen until we get some kind of sub-group or sub-reddit feature.