Turning the Technical Crank
A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn’t here for our high water mark, so I don’t really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say.
I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP—an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn’t expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point.
I failed. I was trying to write a manifesto, didn’t really know how to do it right, and kept running into a vast inferential distance I couldn’t seem to cross. I’m a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutiae about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem; I was thinking about ‘how do I promote NNTP’, when really I should have been going after ‘what would an ideal discussion platform look like, and how does NNTP get us there, if it does?’
So I’m going to go after that first, and work on the inferential distance problem, and then I’m going to talk about NNTP, and see where that goes and what could be done better. I still believe it’s the closest thing to a good, available technological Schelling point, but it’s going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We’ll see.
Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post in an intended sequence on mechanisms of discussion. I know it’s a bit off the beaten track of Less Wrong subject matter. I posit that it’s both relevant to our difficulties and probably more useful and/or interesting than most of what comes through these days. I just took the 2016 survey and it has a couple of sections on the effects of the diaspora, so I’m guessing it’s on topic for meta purposes if not for site-subject purposes.
Less Than Ideal Discussion
To solve a problem you must first define it. Looking at the LessWrong 2.0 post, I see the following technical problems, at a minimum; I’ll edit this with suggestions from comments.
Aggregation of posts. Our best authors have formed their own fiefdoms and their work is not terribly visible here. We currently have limited support for this via the sidebar, but that’s it.
Aggregation of comments. You can see diaspora authors in the sidebar, but you can’t comment from here.
Aggregation of community. This sounds like a social problem but it isn’t. You can start a new blog, but unless you plan on also going out of your way to market it then your chances of starting a discussion boil down to “hope it catches the attention of Yvain or someone else similarly prominent in the community.” Non-prominent individuals can theoretically post here; yet this is the place we are decrying as moribund.
Incomplete and poor curation. We currently do this via Promoted, badly, and via the diaspora sidebar, also badly.
Pitiful interface feature set. This is not so much a Less Wrong-specific problem as a 2010s-internet problem; people who inhabit SSC have probably seen me respond to feature complaints with “they had something that did that in the 90s, but nobody uses it.” (My own bugbear is searching for comments by author-plus-content.)
Changes are hamstrung by the existing architecture, which gets you volunteer reactions like this one.
I see these meta-technical problems:
Expertise is scarce. Few people are in a position to technically improve the site, and those who are have other demands on their time.
The Trivial Inconvenience Problem limits the scope of proposed changes to those that are not inconvenient to commenters or authors.
Getting cooperation from diaspora authors is a coordination problem. Are we better than average at handling those? I don’t know.
Slightly Less Horrible Discussion
“Solving” community maintenance is a hard problem, but to the extent that pieces of it can be solved technologically, the solution might include these ultra-high-level elements:
Centralized from the user perspective. A reader should be able to interact with the entire community in one place, and it should be recognizable as a community.
Decentralized from the author perspective. Diaspora authors seem to like having their own fiefdoms, and the social problem of “all the best posters went elsewhere” can’t be solved without their cooperation. Therefore any technical solution must allow for it.
Proper division of labor. Scott Alexander probably should not have to concern himself with user feature requests; that’s not his comparative advantage and I’d rather he spend his time inventing moral cosmologies. I suspect he would prefer the same. The same goes for Eliezer Yudkowsky or any of our still-writing-elsewhere folks.
Really good moderation tools.
Easy entrance. New users should be able to join the discussion without a lot of hassle. Old authors that want to return should be able to do so and, preferably, bring their existing content with them.
Easy exit. Authors who don’t like the way the community is heading should be able to jump ship—and, crucially, bring their content with them to their new ship. Conveniently. This is essentially what has happened, except old content is hostage here.
Separate policy and mechanism within the site architecture. Let this one pass for now if you don’t know what it means; it’s the first big inferential hurdle I need to cross and I’ll be starting soon enough.
As with the previous, I’ll update this from the comments if necessary.
Getting There From Here
As I said at the start, I feel on firmer ground talking about technical issues than social ones. But I have to acknowledge one strong social opinion: I believe the greatest factor in Less Wrong’s decline is the departure of our best authors for personal blogs. Any plan for revitalization has to provide an improved substitute for a personal blog, because that’s where everyone seems to end up going. You need something that looks and behaves like a blog to the author or casual readers, but integrates seamlessly into a community discussion gateway.
I argue that this can be achieved. I argue that the technical challenges are solvable and the inherent coordination problem is also solvable, provided the people involved still have an interest in solving it.
And I argue that it can be done—and done better than what we have now—using technology that has existed since the ’90s.
I don’t argue that this actually will be achieved in anything like the way I think it ought to be. As mentioned up top, I am a crank, and I have no access whatsoever to anybody with any community pull. My odds of pushing through this agenda are basically nil. But we’re all about crazy thought experiments, right?
This topic is something I’ve wanted to write about for a long time. Since it’s not typical Less Wrong fare, I’ll take the karma on this post as a referendum on whether the community would like to see it here.
Assuming there’s interest, the sequence will look something like this (subject to reorganization as I go along, since I’m pulling this from some lengthy but horribly disorganized notes; in particular I might swap subsequences 2 and 3):
Technical Architecture
Your Web Browser Is Not Your Client
Specialized Protocols: or, NNTP and its Bastard Children
Moderation, Personal Gardens, and Public Parks
Content, Presentation, and the Division of Labor
The Proper Placement of User Features
Hard Things that are Suddenly Easy: or, what does client control gain us?
Your Web Browser Is Still Not Your Client (but you don’t need to know that)
Meta-Technical Conflicts (or, obstacles to adoption)
Never Bet Against Convenience
Conflicting Commenter, Author, and Admin Preferences
Lipstick on the Configuration Pig
Incremental Implementation and the Coordination Problem
Lowering Barriers to Entry and Exit
Technical and Social Interoperability
Benefits and Drawbacks of Standards
Input Formats and Quoting Conventions
Faking Functionality
Why Reddit Makes Me Cry
What NNTP Can’t Do
Implementation of Nonstandard Features
Some desirable feature #1
Some desirable feature #2
...etc. This subsequence is only necessary if someone actually wants to try and do what I’m arguing for, which I think unlikely.
(Meta-meta: This post was written in Markdown, converted to HTML for posting using Pandoc, and took around four hours to write. I can often be found lurking on #lesswrong or #slatestarcodex on workday afternoons if anyone wants to discuss it, but I don’t promise to answer quickly because, well, workday)
[Edited to add: At +10/92% karma I figure continuing is probably worth it. After reading comments I’m going to try to slim it down a lot from the outline above, though. I still want to hit all those points but they probably don’t all need a full post’s space. Note that I’m not Scott or Eliezer, I write like I bleed, so what I do post will likely be spaced out]
Whatever your solution ends up looking like, a key feature has to be “I can post a link on Facebook or whatever that people can click on and read in their web browser.” If you can’t be linked to, it’s no good.
I (too?) am nostalgic for the good ol’ days of Usenet. I’m very very unconvinced that an NNTP-based system could realistically replace Less Wrong. I’m interested in what you have to say, but wonder whether there’s value in some kind of brief overview post along the lines of “Here’s the one-paragraph summary of why I think this is probably a good idea. Here are one-sentence summaries of the strongest five objections and why they don’t change my mind.”
(But maybe not; the effect might be to put off people who could have been persuaded with a gentler run-up.)
I do wonder whether you can really need twenty posts to make your case. Perhaps much of the material will be useful in other ways (e.g., to inform future attempts at community-building, collaboration, etc.)?
The perennial cry is “LessWrong needs more high-quality content!” And when such is offered you go “Eh, condense it into a tl;dr”?
Nope, that’s not what I intended to say. (My apologies for any lack of clarity.) Rather, I think that
20 posts of high-quality content on a single rather peripheral topic arriving in rapid succession would not necessarily be an improvement;
if there are going to be 20 such posts, a TL;DR as well might be helpful at the outset.
There’s not necessarily a one-to-one bullet-point-to-post correspondence in that list; I won’t know exactly how much space it takes to make each point until I’ve done it. It seems excessively long to me too, but it’s how my notes map out.
A summary post is more or less what I tried to start with and failed. It kept coming out in arguments that only ‘work’ for people that already know where I’m coming from, e.g. it assumed the superiority of specialized protocols and the Rule of Separation, neither of which means anything to nontechnical users or even most power users. Our base is tech savvy but it’s mostly post-2000 tech-savvy, I think.
[edit: I might add a bullet-point summary to the bottom of this post without justifications after I’ve had a chance to see the comments and which objections people actually raise]
I assumed you didn’t want people to raise technical objections to this post, and let you present your argument first. But if you want them now, here are some objections that gjm didn’t mention:
Our goal is to make something better than the existing LW software / UX. But we must also allow free linking in, to posts and comments, from ordinary blogs and other sites. These links will be the natural gateway for users new to the community, and lurkers. They must also have a UX at least as good as today’s LW, exposing the features of the new / non-web solution, or else this whole endeavor will be a regression and doomed to fail.
People won’t like a plaintext-oriented interface; they want rich text, inline images and tables, which in practice (in the world of NNTP) translates to HTML. But HTML is far too rich (and unsuitable for human editing). We need an equivalent to comments’ markdown support (or something better that would also be usable for posts). With NNTP, this would be client dependent, so at best it would vary by poster and at worst it would simply not be supported (or not out of the box).
Editing of posts and comments is a crucial feature which NNTP doesn’t provide.
Other features we need or are used to: RSS feed; tagging; user management and direct messaging without the trivial inconvenience of creating a new pseudonymous email account on a different site; server-based state (e.g. ‘unread’ message state in user inbox) for those with multiple client devices; …
Any proposal that isn’t for a gradual change to the existing site will need to be run on a new site. The old lesswrong.com won’t be shut down until the new one has clearly succeeded. So you’ll need to directly compete with lesswrong.com to convince users to switch. You can make an LW → NNTP gateway, but not an NNTP → LW gateway; that is, lesswrong.com can’t automatically publish content (comments) posted via NNTP (or any other protocol, really). So during this period of competition, even if users cross-post, discussion threads will be separate. The new software will have to be clearly superior to convince LW users to switch, let alone diaspora blog authors.
I didn’t, but I was assuming people would anyway. I was actually hoping for higher-level objections like the problems I listed in the post, but, well. I’ll answer you anyway and maybe edit the post later. Most of these fall under 3.2 and 3.3 in the outline.
The first, linking in, is actually the only part of the problem for which no off-the-shelf solution exists. The short version is that the site UX, instead of talking to a database directly on the backend, would talk to NNTP on the backend. Links to arbitrary posts (as a non-exhaustive example) could be as simple as https://newlesswrong/message-id. Top level posts could support some subject-generated shorthand, perhaps.
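To make that concrete, here is a minimal sketch (Python; the hostname is made up, and the URL scheme is the hypothetical one above) of what “NNTP on the backend” means. The web frontend becomes just another NNTP client:

```python
# Sketch only: the web frontend resolves a post URL by pulling the
# article from the site's private NNTP server instead of a database.
import nntplib

def fetch_post(message_id: str) -> str:
    # Hypothetical host, standing in for whatever the site actually runs.
    with nntplib.NNTP("news.newlesswrong.example") as conn:
        # Articles are addressable by globally unique Message-ID, which
        # is what makes a URL like https://newlesswrong/message-id workable.
        _resp, info = conn.article(f"<{message_id}>")
        return b"\n".join(info.lines).decode("utf-8", errors="replace")
```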
I actually do want a plaintext-oriented interface, but I know I’m in the minority. You’ve expressed the solution to the markup problem yourself, though: Markdown is already the effectively-default format on Usenet and all extant NNTP clients. In fact it’s the rise of lightweight markup in general and Markdown in particular that convinced me this could ever be more than a pipe dream. There are more powerful forms of lightweight markup that could be used; the key is that the input format must be readable as plaintext for interoperability between the web and native clients to be possible.
I believe that cancels and supersedes can be kludged to support something that looks like editing even if it isn’t. I may be wrong about this; in particular I’m not sure how supersedes interact with reply chains, because they’re unusable for that purpose on Usenet proper.
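For the technically curious, here is what edit-via-supersede would look like, as a sketch (Python; the group and address are made up). Whether servers and caching clients honor it consistently is exactly the part that needs testing:

```python
# Sketch: an "edit" is a new article whose Supersedes header names the
# old one. Servers that honor Supersedes drop the original article.
from email.message import EmailMessage
import nntplib

def post_edit(conn: nntplib.NNTP, old_id: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "someone@example.invalid"  # made up
    msg["Newsgroups"] = "lw.main"            # made up
    msg["Subject"] = subject
    msg["Supersedes"] = old_id               # e.g. "<abc123@lw.example>"
    msg.set_content(body)
    conn.post(msg.as_bytes())
    # Caveat from above: replies still carry the OLD Message-ID in their
    # References headers, so reply chains need separate handling.
```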
User management has an established solution, and DMing can be implemented as a LW mailbox (that only takes local messages) or a forwarding address, or both. RSS is probably easy. Server based state is easy for the ‘default’ website (really an in-browser client, covered in 1.1 and 1.7) but may be hard for native clients (but anyone using a native client presumably knows what they’re getting into). Tagging may be hard, I’m not sure. Karma is definitely hard, but may be unnecessary.
The short version of this is that, designed correctly, any author or site can adopt the network without being any worse off than they currently are; that is, cooperate-defect leaves you no worse off than defect-defect. The long version is...pretty much all of section 2, actually.
(ETA: I am answering this in moderate detail not to encourage technical back-and-forth but to demonstrate that I have thought this through)
I think this reduces to problem 5. Namely, you can’t replace lesswrong.com with a completely new NNTP-backed site as your opening move.
Ordinary Markdown is insufficient because it doesn’t do inline images and tables, which are sometimes important for posts. There are more powerful alternatives, and some markdown supersets. But I’ll bet they’re not supported by NNTP clients. At best you’ll find one client (or client plugin) that supports what you want. But forcing everyone to use the same client negates the point of NNTP, and isn’t plausible anyway because we need clients for different platforms including mobile.
This definitely requires proof. In addition to the issue with reply threads, NNTP clients expect to be able to cache articles. Propagation of supersede/replace is at best delayed and at worst out-of-order, and will vary between clients. This is IMO a critical issue, a major LW feature.
I’m surprised that you say “anyone using a native client presumably knows what they’re getting into”. You’re proposing NNTP as a superior solution, but also saying anyone who actually chooses to use NNTP will have a hard time and/or fewer features. And I think this applies to other, more important features and not just server based state.
Adoption, for site owners, will at minimum require a significant time/money investment, and at least trivial inconveniences to web-based readers due to unavoidable minor undesirable UX differences. So moving to the new platform needs to make people better off, not just no worse than before. The same applies to posters/readers: they won’t switch over without a compelling reason, nor should we expect them to.
I completely forgot about karma, but it’s so important that I’m promoting it to a new item. I think karma is pretty good, and some alternatives may be better, but nothing at all is much worse.
Meta: replying in a list-based format is inconvenient and I tentatively suggest making a separate reply for each significant list item.
Agreed that it’s inconvenient. Rather than separate it I’ll cut it down. 2, 4, and 6 all collapse to “client feature sets differ”; this is a meta-feature, not a meta-bug. The downside of client control is that not everybody sees the same thing. The upside of client control is client competition, which has similar benefits to market competition.
Solving adoption is the point of section 2 and is too long to describe here. Note that, as mentioned, I do not actually expect this to be adopted. The world isn’t that kind.
3 is a legitimate problem and will be addressed, but it’s the sort of thing where I need to spin up INN and see how it actually behaves when presented with edit-style supersedes. If there is a weak point in my “this is possible” argument, this is it.
That misses my point. Some features, like voting, can’t be implemented as clientside features, because clients would need to communicate about them (and establish consensus).
That’s contemporary HTML. The original HTML, of early-90s vintage, on the other hand, is a simple page description language designed to be hand-coded.
A well-defined limited subset of HTML would probably be easier to implement than some superset of markdown.
A subset of HTML is still unsuited to human editing. Even more so than full HTML, because it doesn’t have the justification of being a complex and extendable syntax. A superset of markdown would be much more usable, for people writing plaintext, than a subset of HTML. Especially if, as now, the majority of posts and comments require more or less only regular markdown and no superset features like tables.
Asciidoc might be an alternative when more power is needed. I haven’t used it, but ESR once said that it does markdown’s job better than Markdown itself.
HTML is wholly unsuited to human editing or even reading. I blame it for ruining email. Well, that and top-posting.
People from the 90s would disagree, and rich text editors can output whatever. Of course markdown is better for specifying minor formatting while you write the content—that’s what it was explicitly designed for. However, the advantage of HTML is ubiquity.
The majority of posts and comments use only links, bold/italics, and an occasional bulleted list. Inline images are culturally disapproved of and tables are rare. At this level pretty much anything would work.
I don’t get your point here. HTML’s ubiquity is important for display, not for editing. Markdown is converted to HTML for display. As a user I prefer writing in markdown to writing in HTML. Don’t you?
That doesn’t mean anything would be equally as comfortable as anything else.
We are talking about the acceptable format for messages as they are processed and stored by the system, right? Ease of input is a separate issue and your editor can and should allow you to write in whatever way you find most comfortable.
The format for server-side processing and storage should be the input format unless there is specific cause not to use it (3.2). Conversion to display formats should be done client-side and as late as possible. HTML, as Dan says, is a display format.
(this distinction exists even for server-side clients, e.g. web clients)
When you say “input” here you mean “what the client sends to the server”. When DanArmak is talking about input, he is talking about the user experience, ease of writing and editing. These are obviously not the same thing.
It is, now. When designed, it wasn’t.
If all users input in the same format, then it should be the storage format too. Rendering to HTML can be done when it’s actually needed. (Plus or minus caching/prerendering for performance.)
If users can choose to input in different formats, and we can’t convert between these formats (e.g. from HTML to markdown), then I think it would be easiest to just store whatever the user originally input. The main reason for original-format storage is editing, and users normally edit only their own content, so they shouldn’t mind the format it’s in.
If I write in markdown, but my editor has to send HTML to the server, then it has to implement an HTML-to-markdown conversion for editing, which raises all kinds of issues (like supporting the HTML output of an old version of the editor, never mind of different editors) and trying to solve them just doesn’t seem worth the bother. What does adopting HTML as a storage format get you?
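A sketch of the store-original, render-late scheme described above (Python, using the third-party markdown package; the dict is a stand-in for real storage):

```python
# Sketch: store what the user typed; produce HTML only at display time,
# with a cache standing in for prerendering. Storage is a dict purely
# for illustration.
import functools
import markdown  # third-party: pip install markdown

POSTS = {}  # message_id -> original Markdown source

@functools.lru_cache(maxsize=1024)
def render(message_id):
    return markdown.markdown(POSTS[message_id])  # HTML exists only for display

def save(message_id, source):
    POSTS[message_id] = source
    render.cache_clear()  # coarse invalidation, but fine for a sketch

def edit_form(message_id):
    return POSTS[message_id]  # editing never round-trips through HTML
```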
The suggestion in your edit seems possibly reasonable. For what it’s worth, I think my main objections (not all of them at all well thought through) are:
Existing NNTP clients are clunky.
Google Groups, the obvious choice for those who want something in their web browser (which I bet will be almost everyone), is horrible. (And, also, is a presumably-not-profitable Google product, and therefore liable to be discontinued at any time.)
Users are used to the existing LW interface and are likely to be put off by anything too different.
NNTP works well (technically) for ephemeral discussions; not so well for things intended to last. (Especially if done via Google Groups, which has thrown away lots of old Usenet posts and I think can’t be relied on not to throw away other things later.) So e.g. any further “Sequences” seem like they would not be well served by this medium.
I disagree that existing NNTP clients are clunky. If anything, I find existing web forum software clunky. SSC is my go-to example because it’s where I ended up in the diaspora fallout. It gets on the order of seventy comments a day and is incredibly unwieldy to navigate. And it’s a single site. In Usenet days, with native clients, I routinely perused groups with an order of magnitude more discussion and had zero trouble navigating—and the same interface worked for all groups. It is this form of convenience I would like to revive. It cannot be done in a browser—but it doesn’t need to be. The end goal is for the browser to be the non-trivial-inconvenience-provoking default, but for native clients to be an option for people who want or need the kind of power they provide.
It’s relevant to the GG and ephemerality objections that while I’m suggesting NNTP, I’m not going to suggest Usenet itself; but rather, a private network, containing only LW-related groups, with infinite retention and programmably dumpable content. i.e. there is no risk of losing anything. Sequences may be an issue, but because of curation limitations, not retention. (also, yes, GG is a godawful sack of shit and Google has atrociously mismanaged their possession of a cultural treasure trove)
I actually think the existing LW/reddit-style interface has the least-horrible UX of web-based discussion software out there. I wouldn’t object to keeping it looking more-or-less the way it does; my problem is with mechanism more than policy.
I completely agree that comments on SSC and other blogs are incredibly annoying. I would participate far more in those comment threads if they used something like LW/reddit. I would happily pay money to make it so, but there’s no cause I can donate to that would replace all Wordpress blogs in the world with reddit, or even with something halfway decent like Disqus.
I also think pre-Web discussion systems did some things better than LW/reddit. My own experience is with 90s email, not usenet, but I think they were fairly similar. On the other hand, there are important innovations like editing, voting, and moderation, which classic email and usenet lack. So just going back to one of those systems isn’t a solution in itself. And while user features should be located at the client when possible, these particular features can’t work unless all clients communicate about them, at which point they become protocol extensions—and everyone is forced or at least strongly encouraged to use one of the few clients that support your community’s favorite extensions, removing much of the value of a client-neutral protocol.
Of course this deals with the Google Groups objection simply by making it impossible to use Google Groups :-).
That is a feature, not a bug. :-P
This is a subject that strongly matters to me. I too would love to see a return to non-proprietary, open communication protocols, open source software and decentralized hosting—everywhere on the Internet, not just on Less Wrong. This is one of the few capital-C Causes in my area of professional competence that I would happily donate a lot of labor and/or money to, if only I knew of a way to promote it. But I don’t, and I don’t know of anyone who does.
To argue that the problem can be solved in the LW microcosm, you would need to either take advantage of LW-specific community features, or explicitly not solve the general problem (e.g. by not scaling, or by admitting that some things would always remain Web-only and non-interoperable). If either one is the case, please mention that explicitly.
Like gjm, I immediately want to jump the inferential distance to the usual unsolved problems. (E.g., how do you handle ‘graceful degradation’ for people who encounter a necessarily web/http link for the first time, so the community can grow and people with regular blogs can link to it?)
It might help if you add explicit disclaimers saying “please don’t bring up issue X, that’s for a future post”. Are there things you don’t want to talk about before a certain point? Is your sequence planned out enough (and short enough in practice) that I should refrain from anticipating certain issues, even in separate posts?
I fear that my comment(s) might appear negative, focusing on problems that I don’t know how to solve before you even posted about them. I very much want this conversation (and the wider LW 2.0 one) to be constructive! If you think there’s a better way for me to engage with it, please don’t hesitate to tell me so. And thank you for taking the time to advocate a solution to a problem I deeply care about.
ETA: also, I would very much enjoy myself writing posts on subjects like “Your Web Browser Is Not Your Client” (or as it’s sometimes known, The Web Is Not The Net), “The Proper Placement of User Features (is at the clientside)” aka “separation of protocols from implementation”, and so on. I just didn’t think it was on-topic for LW. But if you make it on-topic, then I might just join in.
Talking about optimizing a widely-used system seems very on-topic for Less Wrong. At any rate, it doesn’t seem any more off-topic than things like fibromyalgia. I probably couldn’t contribute anything of value, but I’d be fascinated by those hypothetical posts.
For many reasons, I agree with this.
The only LW-specific community feature my proposal takes advantage of is our cultural applause of “I cooperate in Prisoner’s Dilemmas.”
It scales in the specific sense that it allows for incoming users and authors to be added incrementally. It fails to scale in the sense that it can never be an open system in the same sense that Usenet is an open system. However, that is no worse than our existing situation.
Graceful degradation is a hard problem. Please don’t bring up graceful degradation, that’s for a future post. :-P
I am waffling on whether to encourage people to anticipate issues. On the one hand, it’s helpful for me to know what I need to address along the way. On the other, I really don’t want the comment threads bogged down by material that only makes sense to our technical contingent.
I love it that you jumped to the correct interpretation of The Proper Placement of User Features.
New users, who encounter inbound links on other sites, aren’t yet invested enough in the community to join an NNTP network. This isn’t a PD-type problem, since “cooperation” here requires investment that only pays off later (if at all): spending time choosing, installing, and learning to use a new client application, usually without understanding why a non-web solution is being used, let alone that particular solution.
This is one I’m glad someone asked about because I thought it was clear and it wasn’t: I am not advocating a native-client-only approach. I am aiming for something that uses NNTP as mechanism on the back end, so invested users have better options than “bug the 1.5 guys in a position to fix things to fix things” and interoperability between LW and the diaspora is easy rather than hard.
Inbound users should not have to know or care that NNTP is involved (1.7), for exactly the reason you mention: to the average user, the web is the internet (1.1, first because I expect most people here don’t realize it either), and explaining to them why they are wrong is not helpful. I want the answer to the eventual, inevitable question “how do I do X” (where X is not some fundamental operation like “read” or “post”) to be “install this app today and be happy,” as opposed to “bug person Y for six months and hope they take the time to implement X.”
The ‘cooperate’ action I need has to come from diaspora authors, not inbound new users; they are the ones that would need to be convinced to join the network and use blog software that supports it. Making that cooperate action as close to costless as possible (I would settle for “no harder than setting up your own blog”) is a hard problem—but, I believe, solvable.
Thanks for making that clear. I can’t foresee what your argument for NNTP on the backend is going to be, so I’m interested in reading your further posts on it.
I appreciate both your encouragement and your criticism.
I don’t. And I’m sufficiently politically naive to think that broadcasting that is a good idea.
Maybe the iterated ones, if the discount factor’s right. Maybe the real thing too, sometimes. Depends on the opponent. Depends how I’m feeling about counterfactuals. Whaddya got?
I think any proposal based on actual NNTP is probably doomed.
I think any proposal that asks the user to use a client that isn’t The Web is doomed (but it looks like you are addressing that in 1.7.)
BUT, I think the notion of redesigning this system around something that is morally just like NNTP is a hugely interesting and not-totally-crazy one; AND even if you completely fail, I think there will be hugely valuable ideas in this sequence for people like me who also think about this kind of thing.
So please write this sequence!
What do you think about the following alternative approach?
Expose all LW features via a convenient, well designed, documented/stable API. (I don’t know how much work that would be, but let’s ignore that for a moment.)
You’re now free to write and use a non-web client for LW. You can add whatever features you want that make sense as clientside-only features. Perhaps you can add support to an existing email/usenet/… client instead of writing one from scratch.
You can also add support for other sites, such as wordpress or tumblr blogs. Naturally, the client will have to enable for each site only the ‘shared’ features that site supports, like voting on LW. Then you can use that client to improve your personal experience on all sites, to the degree that’s possible without changing each site’s server software.
In fact, you will naturally create (or adopt, or adapt) a middleman protocol which clients will use to talk to per-server plugins. If you’re sufficiently masochistic, you can even use NNTP here.
If many people like your client and start using it, we can at that much later date consider issues like encouraging more newcomers to use it, or otherwise making it the ‘main’ client for some sites.
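To make the proposal concrete, here is a sketch of a clientside-only feature built against a hypothetical documented LW API (the endpoint and field names are invented). It implements the author-plus-content comment search mentioned in the post, with no server changes at all:

```python
# Sketch: a purely client-side feature over a hypothetical API.
# Endpoint and field names are invented for illustration.
import requests

API = "https://lesswrong.example/api/v1"  # hypothetical

def search_comments(post_id, author, text):
    comments = requests.get(f"{API}/posts/{post_id}/comments").json()
    return [c for c in comments
            if c["author"] == author and text.lower() in c["body"].lower()]
```

The point is that a feature like this stops being a request queued on the server maintainers and becomes something any user can build.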
This would be very reminiscent of the multi-protocol, interoperable, and open-standard IM scene of the 90s and early 2000s, before the big providers (Google, Yahoo, Facebook, et al) all killed off their Jabber support and became closed gardens. And if such a protocol or client ever comes close to succeeding on a world-wide scale, I expect it would be killed in the same manner. In practice, of course, it would fail much sooner: the HTTP traffic of a typical website isn’t meant to be an API and can’t be easily reverse engineered to behave like one, never mind stability guarantees. But if we only want it for a few friendly sites, then it’s not technologically problematic.
This would be a second-best approach. The main benefit that the use of NNTP has over such an approach is the ability to leverage the huge existing library of NNTP server and client software. The only from-scratch development required would be a forumesque in-browser client—which might already exist, though I am aware of no good ones.
What you describe would be very similar to designing an NNTP 2, a goal that I find laudable but that I really do think is socially (not technically) impossible. If it were possible, I wouldn’t recommend implementing it on top of HTTP. “Cram the round peg of semantic information over http no matter how badly it fits that square hole” is my major beef with the entire direction of software development over the last ten years.
The comparison to Jabber is apt, and I hate the death of jabber for reasons very similar to my hate for the death of nntp. Mechanism should not be a closed garden. Individual communities, sure; and, as you say, what I want it for here is a few friendly sites. But mechanism, never.
So why not write a bridge between the LW API (serverside) and NNTP (clientside), so you can use it with your favorite NNTP software?
Obviously it would have to be a complex, stateful bridge, probably with own copy of the content and so on. But it’s not a priori clear to me that this would require much more work than your original proposal. And it has the great advantage of being unilateral: you don’t need to convince anyone else (like the lesswrong.com admins) to do anything.
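One direction of such a bridge is easy to sketch (Python; the API fields and domain are invented): map each LW comment to a news article, turning parent links into References headers so a threading newsreader like trn can thread it natively.

```python
# Sketch of the LW-to-NNTP half of the bridge: convert a comment
# fetched from a hypothetical LW API into an article the bridge's
# local news spool can serve. All field names are invented.
from email.message import EmailMessage

def to_article(comment: dict) -> bytes:
    msg = EmailMessage()
    msg["From"] = f'{comment["author"]} <noreply@lw-bridge.example>'
    msg["Newsgroups"] = "lw.discussion"  # made up
    msg["Subject"] = "Re: " + comment["post_title"]
    msg["Message-ID"] = f'<{comment["id"]}@lw-bridge.example>'
    if comment.get("parent_id"):
        # Native newsreaders thread on References.
        msg["References"] = f'<{comment["parent_id"]}@lw-bridge.example>'
    msg.set_content(comment["body"])
    return msg.as_bytes()
```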
At this point, is LW itself anything more than a database with a schema? You’re pushing pretty much everything into the client.
Which would be a good thing in and of itself, as a matter of software design, even if no-one planned on ever writing new clients. But the ability to write new clients (or change the existing one for private use, if it is open source) is much more important.
Yes, but we’re leaving the territory of “let’s make LW work well” and entering the territory of “let’s design a new way for humans to collaborate”.
The two aren’t contradictory or unrelated :-) More to the point, the idea of multiple interoperable clients is hardly a “new way for humans to collaborate”, or even just “new”.
At this level of abstraction what is it that you want to change? LW does have an API expressed through HTTP and we have multiple interoperable clients called “browsers”.
Like I said, I have no idea how far LW is from having a good API already. (Not all HTTP traffic is a proper API, but maybe it’s a well written site.) If an API already exists, so much the better; move on to step 2.
By “client” I mean the actual client-side content and code of the website, or a client application for non-browser implementations. Not the browser (or equivalently, the OS).
Error, if I understand correctly, seems to want the ability to modify the UX and add new clientside features, with different users (like Error) choosing different features, and without requiring the server to be changed, everyone to agree, and the people who can actually change the server to spend time on it. If this is indeed Error’s main motivation, I suggested that it might be more easily (almost unilaterally) achieved by writing a new client. The new client might be web-based or not; that’s unimportant to the argument.
Note that despite my cynicism about this as a way to save/improve/fix LessWrong specifically, I would absolutely love a solid reddit-to-nntp gateway, that allowed me to use trn or whatever to read and respond to posts and comments, with similar threading and better state management.
I’d use it on LW and a bunch of other similar places.
Off topic question: are there many sites that use reddit software? I’ve only encountered LW so far. Many things that might have been independent sites are created as subreddits these days.
I, for one, am very interested in this. I don’t see this working, because people resist change even when presented with clearly better alternatives, and because of the legacy community.
But, as you say, this is LessWrong, it’s worth a try.
I also loved usenet! It fell apart when all the spammers and trolls and idiots turned up. (I think the binary groups are a distraction; plenty of usenet servers just didn’t carry them.)
Those are just words for people whose opinions you’d rather not read, so we need some sort of moderation system.
Trusted moderators don’t have the time or energy to do that, and don’t scale, so we need some sort of group voting system.
Reddit was brilliant for a while, then Hacker News, then Less Wrong.
All three seem to have gone downhill in different ways. What can we learn from these three? (It’s actually possible that Hacker News is still excellent, but it no longer posts the sort of things that interest me often enough that I ever look anymore.)
Stack Overflow (and related sites) seem to have stayed consistently excellent for a long time now. They’re also laudably open. But Stack Overflow has a consistent problem with interesting questions getting deleted by fascist moderators, and it seems to only be good for Questions and Answers. It’s not a place to pontificate about fibromyalgia, for instance.
What can we learn from Stack Overflow? Can we make something like that, for people we like to post essays and comment on them, without driving our best away?
That’s not a bug, that’s a feature. SO was designed and restricted for a Q&A format; it deliberately omits features like comment trees. Its creators believe that it succeeded because it was restricted, since a more specific problem is easier to solve. They’ve gone on to try and solve the more general discussion forum “problem” with Discourse, but it’s still very lean on features—because it’s explicitly designed for mobile touch interfaces and for reading over writing.
Personally, I’m not convinced that SO—as a community—couldn’t have succeeded just as well, or better, with quite different software features. But that’s what they believe.
Stack Overflow is different from the other websites you mentioned, because it is, to some degree, timeless.
I suspect that the importance of time when posting on news sites contributes to the deterioration. Some people spend more time on the website, some people spend less. The people who spend more time get a bonus in the system.
Problem is that “spending a lot of time debating online” can correlate with some undesired traits, such as “doesn’t do research”, “writes without thinking”, “doesn’t read the whole article before commenting”, etc. If these traits are turned up to eleven, of course those people get banned. But within the acceptable range, those on the bad side of the range get an advantage, and the ‘Overton window’ will gradually shift in their favor.
1) I loved Usenet prior to Eternal September, and used it through much of the 90s as well. It’s not coming back.
I’m part of another group which tried replacing their dysfunctional mailing lists with NNTP, and probably a dozen of us used it for a month or two before we realized that nobody else was coming and went back to the main group.
2) Running code trumps theoretical arguments. Don’t write a series of posts, set up your system and see if it works.
Re 2: running code doesn’t trump anything if it would take a lot of effort to integrate into an existing system and no one else is interested in it, least of all the site admins.
Running new code elsewhere (not on lesswrong.com), and convincing everyone to switch over, would be a sufficient demonstration. I think writing posts to try to convince people or sound them out before implementing such a thing is a good approach.
A failed attempt is more decisive and teaches us more than a series of blog posts.
Mostly, I’m saying “don’t waste time on more posts”. If that means dropping the idea entirely and doing something more useful (perhaps posting on a more interesting topic), ok. If that means creating something and posting about how to use it, ok.
I think condensing the argument for it down from 20 posts to, say, 3 total, would be wise, but eliminating the ‘why should we even think about this’ phase and skipping to the ‘make it’ phase seems too much.
You’ve been anchored.
Condensing it down to a comment in an existing thread rather than a top-level post would be wise.
What do you mean by ‘anchored’ in this context?
It was half-joking; I don’t actually know how serious you are or how much thought you’ve put into the recommended number of posts. What I meant to imply was that the mention of “20” as a starting point made you pick a higher number as a counter-offer than you would pick if you’d come at the question cleanly.
reference: Wikipedia Anchoring article.
I meant, he listed 16 articles he wanted to write, and I didn’t remember the exact number but it was around 20, and I thought that was excessive. I figured that 3 would do.
So yeah, I was anchored on what he said he’d do, as a representation of what I was recommending he change from doing. Seems fair.
I think a long post sized explanation might be warranted, and should certainly be allowed, i.e. a comment-sized defense of the idea should not be expected. Even if such a presentation is possible, it would have to assume no inferential distance and so be less effective for at least some readers.
The inferential distance is significant; looking through this thread, the impression I get is that the people who have actually used NNTP in the past do not need to be convinced. Or rather, they need to be convinced only that it is possible, not that it is desirable. Dagon above, for example.
I have used NNTP in the past and am not yet convinced.
Well, I at least have used NNTP (and also skimmed the RFC as a refresher just now) and still need to be convinced that it’s better than the status quo.
Fair enough. :-)
ETA: Also, it is relevant that there is an RFC for you to skim, and that it gets read by many, many people not necessarily associated with us.
Decisive, yes. Teaches more, only if anyone is paying attention.
Ok, maybe “teaches us no less” :)
From the position of author, the important difference between posting an article here and posting an article on my personal webpage is the control over the discussion.
Posting here is convenient: the whole website is already set up and maintained, I just need to write the text. My article will immediately get many readers, and it will be approximately the kind of readers I want. Even the moderation by crowd is provided for free.
On the other hand, the cost of the convenience is my freedom to make different choices. If I have opinions on the website functionality or design, it’s not my choice. I have to think whether my topic is appropriate for the website; while on my own blog I can post anything. If I disagree with the moderation, too bad, I am just one among many voters. On my own blog I can make the ultimate decisions, block the users I don’t like, and keep the debate nice according to my criteria of niceness.
One of the interesting things about NNTP’s structure is that the moderator and the host don’t need to be the same entity or even use the same software. The same goes for UX elements. It would be entirely possible to run something-that-looks-like-a-blog on your own site, have it use hypothetical-lesswrong-hosted NNTP for hosting its content (buying you native-client support for users who want it), and still have ultimate control over who can post what. I’ll be describing how that works at some point.
It would rely on goodwill from the LW hosts, of course; but the worst they could do is stop hosting you—and they could not hold your content hostage as long as someone, somewhere, has kept a local cache of it. You could even self-host and still interoperate with the site, because the system was designed to be decentralized even though it doesn’t have to be used that way.
My understanding is that another difference is even more important to the author, and that is control over client-side user experience (UX). A neutral protocol or API would allow for different and/or more customizable clients, and readers (posters, commenters) could modify their UX and do things like aggregate content from different sites without imposing on other users or on the site maintainers.
That suddenly reminds me of Urbit. Wouldn’t it be funny… X-)
Is this a good summary of your argument?
NNTP was a great solution to a lot of the problems caused by mailing lists. The main ones being:
content duplication—mailing lists are bad because everyone gets their own copy of each article.
reduced content accessibility—mailing lists suck because you miss out on great articles if they were sent before you were part of the mailing list.
We are facing similar problems now. A lot of people have their own sites where they host their own content. We either miss out on great content if we don’t trawl through a ton of different sites or we try to make lesswrong a central source for content and face problems with:
content duplication—through needing to cross post content (essentially duplicating it)
harder content accessibility—the alternative to cross posting is providing a link, but this is an annoying solution that can be jarring as you need to go to an entirely different site to access the content you want.
NNTP would solve the problems we have now in a similar way to how it solved the problems with mailing lists. That is, it would provide a central repository for content and a way to access this content.
I currently think the best way to address the last point is to set up a Web API similar to the Blogger Web API. Discussing NNTP, at least to me, makes the solution appear a lot more complicated than it needs to be. That said, I don’t know much about NNTP, so I could be overlooking something very important; I am interested in what your future posts will explore.
With a Less Wrong Web API, websites could be created that act like views in a database. They would show only the content from a particular group or author. This content would, of course, be styled according to the style rules on the website.
These websites could be free (DNS name and web development costs aside) using services like GitHub Pages, because there should be no need for a back-end: the content and user information is all hosted on Less Wrong. You post, retrieve content, and vote using the API. It should also be fairly easy to create more complicated websites that aggregate and show posts based on user preferences, or even to create mobile applications.
The solution to reading all that content is RSS. The solution to, basically, cross-linking comments hasn’t been devised yet, I think.
So, is that Reddit with more freedom to set up custom CSS for subreddits? Or are there deeper differences?
As far as I see it, there are 2 basic classes of solutions.
The first type of solution is something like reddit or Facebook’s newsfeed which involves two concepts: linkposts which are links to or cross posts of outside content and normal posts which are hosted by the site itself. Making use of RSS or ATOM can automate the link posts.
The second type of solution is something like the Blogger API with extended functionality to allow you to access any content that has been posted using the API. Other things it would include would be, for example, the ability to list top pages based on some ranking system.
In the first type of solution, LessWrong.com is a hub that provides links to or copies of outside content. Smooth integration of the comments and content hosted outside of this site would, I think, be hard to do. Searching of the linked content and handling permissions for it nicely would be difficult as well.
In the second type of solution, LessWrong.com is just another site in the LessWrong Sphere. The functionality of all the sites in this sphere would be driven by the API. You post and retrieve using the API, which means that all posts and comments, regardless of their origination sites, can be available globally. Creating a prototype for this type of solution shouldn’t be too hard either, which is good.
The deeper difference is the elimination of linkposts. All content posted using the API can be retrieved using the API. It is not linked to. It is pulled from the one source using the API.
The closest existing solutions are off-site comment management systems like Disqus. But they’re proprietary comment storage providers, not a neutral API. And each such provider has its own model of what comments are and you can’t change it to e.g. add karma if it doesn’t do what you want.
Disqus is just a SaaS provider for a commenting subsystem. The trick is to integrate comments for/from multiple websites into something whole.
Solving such integration and interoperability problems is what standards are for. At some point the Internet decided it didn’t feel like using a standard protocol for discussion anymore, which is why it’s even a problem in the first place.
(http is not a discussion protocol. Not that I think you believe it is, just preempting the obvious objection)
That’s an interesting point. What are the reasons NNTP and Usenet got essentially discarded? Are some of these reasons good ones?
Usenet is just one example of a much bigger trend of the last twenty years: the Net—standardized protocols with multiple interoperable open-source clients and servers, and services being offered either for money or freely—being replaced with the Web—proprietary services locking in your data, letting you talk only to other people who use that same service, forbidding client software modifications, and being ad-supported.
Instant messaging with multi-protocol clients and some open protocols was replaced by many tens of incompatible services, from Google Talk to Whatsapp. Software telephony (VOIP) and videoconferencing, which had some initial success with free services (Jingle, the SIP standards) was replaced by the likes of Skype. Group chat (IRC) has been mostly displaced by services like Slack.
There are many stories like these, and many more examples I could give for each story. The common theme isn’t that the open, interoperable solution used to rule these markets—they didn’t always. It’s that they used to exist, and now they almost never do.
Explaining why this happened is hard. There are various theories but I don’t know if any of them is generally accepted as the single main cause. Maybe there are a lot of things all pushing in the same direction. Here are a few hypotheses:
Open protocols don’t have a corporate owner, so they don’t have a company investing a lot of money in getting people to use them, so they lose out. For-profits don’t invest in open protocol-based services because people can’t be convinced to pay for any service, so the only business model around is ad-based web clients. And an ad-based service can’t allow an open protocol, because if I write an open source client for it, it won’t show the service provider’s ads. (Usenet service used to cost money, sometimes as part of your ISP package.)
The killer feature of any communications network is who you can talk to. With proprietary networks this means who else made the same choice as you; this naturally leads to winner takes all situations, and the winner is incentivized to remain proprietary so it can’t be challenged. Interoperable solutions can’t compete because the proprietary providers will be able to talk to interoperable users and their own proprietary users, but not vice versa.
In the 80s and early 90s, when the first versions of crucial protocols like email were created, the Net was small and populated by smart technical people who cared about each others’ welfare and designed good protocols for everyone’s benefit—and were capable of identifying and choosing good programs to use. Today, the Web has three to four orders of magnitude more users (and a similar increase in programmers), and they aren’t any more technologically savvy, intelligent or altruistic than the general population. Somewhere along the way, better-marketed solutions started reliably winning out over solutions with superior technology, features and UX. Today, objective product quality and market success may be completely uncorrelated.
There are other possibilities, too, which I don’t have the time to note right now. This is late in the night for me, so I apologize if this comment is a bit incoherent.
The Web, of course, is nothing but a standardized protocol with multiple interoperable open-source clients and servers, and services being offered either for money or freely. I am not sure why you would want a lot of different protocols.
The net’s big thing is that it’s dumb and all the intelligence is at the endpoints (compare to the telephone network). The web keeps that vital feature.
That’s not a feature of the web as opposed to the ’net. That’s business practices and they are indifferent to what your underlying protocol is. For example you mention VOIP and that’s not the “web”.
Never do? Really? I think you’re overreaching in a major way. Nothing happened to the two biggies—HTTP and email. There are incompatible chat networks? So what, big deal...
Sigh. HTTP? An ad-based service would prefer a welded-shut client, but in practice the great majority of ads are displayed in browsers which are perfectly capable of using ad-blockers. Somehow Google survives.
No, not really. Here: people like money. Also: people are willing to invest money (which can be converted into time and effort) if they think it will make them more money. TANSTAAFL and all that...
This is like asking why, before HTTP, we needed different protocols for email and IRC and usenet, when we already had standardized TCP underneath. HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
The application-level service exposed by modern websites is very rarely a standardized (i.e. documented) protocol, and never unintentionally or ‘by default’. You can’t realistically write a new client for Facebook, and even if you did, it would break every other week as Facebook changed their site.
I use the example of Facebook advisedly. They expose a limited API, which deliberately omits all the bits they don’t want you to use (like Messenger), and is further restricted by a TOS that explicitly forbids clients that would replace a major part of Facebook itself.
That’s true. But another vital feature of the net is that most traffic runs over standardized, open protocols.
Imagine a world where nothing was standardized above the IP layer, or even merely nothing above UDP, TCP and ICMP. No DNS, email, NFS, SSH, LDAP, none of the literally thousands of open protocols that make the Net as we know it work. Just proprietary applications, each of which can only talk to itself. That’s the world of web applications.
(Not web content, which is a good concept, with hyperlinks and so forth, but dynamic web applications like Facebook or Gmail.)
I mentioned VOIP exactly because I was talking about a more general process, of which the Web—or rather modern web apps—is only one example.
The business practice of ad-driven revenue cares about your underlying protocol. It requires restricting the user’s control over their experience—similarly to DRM—because few users would willingly choose to see ads if there was a simple switch in the client software to turn them off. And that’s what would happen with an open protocol with competing open source clients.
Email is pretty much the only survivor (despite inroads by webmail services). That’s why I said “almost” never do. And HTTP isn’t an application protocol. Can you think of any example other than email?
Google survives because the great majority of people don’t use ad blockers. Smaller sites don’t always survive and many of them are now installing ad blocker blockers. Many people have been predicting the implosion of a supposed ad revenue bubble for many years now; I don’t have an opinion on the subject, but it clearly hasn’t happened yet.
That doesn’t explain the shift over time from business models where users paid for service, to ad-supported revenue. On the other hand, if you can explain that shift, then it predicts that ad-supported services will eschew open protocols.
Huh? HTTP is certainly an application protocol: you have a web client talking to a web server. The application delivers web pages to the client. It is by no means an “agnostic” protocol. You can, of course, use it to deliver binary blobs, but so can email.
The thing is, because the web ate everything, we’re just moving one meta level up. You can argue that HTTP is supplanting TCP/IP and the browser is supplanting the OS. We’re building layers upon layers, matryoshka-style. But that’s a bigger and a different discussion than talking about interoperability. HTTP is still an open protocol with open-source implementations available at both ends.
You are very persistently ignoring reality. The great majority of ads are delivered in browsers which are NOT restricting the “user’s control over their experience” and which are freely available as “competing open source clients”.
Sure. FTP for example.
Why is that a problem? If they can’t survive they shouldn’t.
The before-the-web internet did not have a business model where users paid for service. It pretty much had no business model at all.
HTTP is used for many things, many of them unrelated to the Web. Due to its popularity, a great many things have been built on top of it.
The point I was making is this: when a server exposes an HTTP API, that API is the protocol, and HTTP is a transport just like the TCP and TLS layers beneath it. The equivalent of a protocol like SMTP on top of TCP is a documented API on top of HTTP. The use of different terms confused this conversation.
My point is, you can’t interoperate with Facebook or Gmail or Reddit just by implementing HTTP; you need to implement an API or “protocol” to talk to them. And if they don’t have one—either deliberately, or because their HTTP traffic just wasn’t designed for interoperability—then there is no open “protocol”.
The great majority of web ads are actually displayed, keeping revenue flowing; they’re not removed by adblockers. Even if everyone installed adblockers, I believe ads would win the ensuing arms race. It’s much easier to scramble a site, mixing the ads into the rest of it so they can’t be removed without breaking the whole thing, than to write a program that unscrambles all websites. (In contrast to the problem of unscrambling one specific website, which is pretty easy, as the history of software copy protection shows.)
That is only true because the server provides the client software, i.e. the website’s client-side components. That the browser is open source is as irrelevant as the fact that the client OS is. The actual application trying to enforce the display of ads is the website that runs in the browser.
The amount of use FTP sees today is completely negligible compared to its market share twenty years ago. But I agree, file servers (and p2p file sharing) are good examples of an area where most protocols are documented and interoperable. (Although in the case of SMB/CIFS, they were documented only after twenty years of hard struggle.)
I didn’t say it was a problem…
That’s not true. There were many services being offered for money. Usenet was one. Email was another, before the advent of ad-supported webmail.
This from here seems pretty accurate for Usenet:
Regarding NNTP for LessWrong: I think we also have to take into account that people want to control how their content is displayed/styled. Their own separate blogs easily allow this.
Not just about how it’s displayed/styled. People want control over what kinds of comments get attached to their writing.
I think this is the key driver of the move from open systems to closed: control. The web has succeeded because it clearly defines ownership of a site, and the owner can limit content however they like.
My opinion? Convenience. It’s more convenient for the user to not have to configure a reader, and it’s more convenient for the developer of the forum to not conform to a standard. (edit: I would add ‘mobility’, but that wasn’t an issue until long after the transition)
And it’s more convenient for the owner’s monetization to not have an easy way to clone their content. Or view it without ads. What Dan said elsewhere about all the major IM players ditching XMPP applies.
[Edited to add: This isn’t even just an NNTP thing. Everything has been absorbed by HTTP these days. Users forgot that the web was not the net, and somewhere along the line developers did too.]
I find it difficult to believe that mere convenience, even amplified with the network effect, would have such a drastic result. As you say, HTTP ate everything. What allowed it to do that?
It’s more appropriate to say that the Web ate everything, and HTTP was dragged along with it. There are well known reasons why the Web almost always wins out, as long as the browsers of the day are technologically capable of doing what you need. (E.g. we used to need Flash and Java applets, but once we no longer did, we got rid of them.)
Even when you’re building a pure service or API, it has to be HTTP or else web clients won’t be able to access it. And once you’ve built an HTTP service, valid reasons to also build a non-HTTP equivalent (high performance, efficiency, full-duplex semantics) rarely apply.
Finally, there’s a huge pool of coders specializing in web technologies.
HTTP eating everything isn’t so bad. It makes everything much slower than raw TCP, and it forces the horribly broken TLS certificate authority model, but it also has a lot of advantages for many applications. The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
I’ve been asking for them and got nothing but some mumbling about convenience. Why did the Web win out in the 90s? Do you think it was a good thing or a bad thing?
If you specify that your client is a browser, well, duh. That is not always the case, though.
But you’ve been laying this problem at the feet of the web/HTTP victory. So HTTP is not the problem?
I think it was more in the 00s, but either way, here are some reasons:
The core feature of the Web is the hyperlink. Even the most proprietary web application can allow linking to pieces of its content—and benefits a lot from it. And it can link out, too. I can link to a Facebook post, even if I can’t embed it in my own website. But I can’t link to an email message. And if I include an outgoing link in my email, clicking it will open the web browser, a different application, which is inconvenient.
People often need to use non-personal computers: at work, in internet kiosks, etc. They can’t install new client software on them, but there’s always a browser. So a website is available to more people, at more times and places.
Pieces of web content can be embedded in other websites no matter how they are written. This is a kind of technology that never really existed with desktop applications. If I need to display an ad, or preview a link’s target, or use a third-party widget, or embed a news story, I can just put it in an iframe and it will work, and I don’t care how it’s implemented. This is a huge difference from the desktop world: just try to embed a Qt widget in a GTK application.
A well-written website works on all browsers. At worst, it might look bad, but would still be usable. Client apps need to be written separately for different platforms—Windows, Mac, Linux, and the mobile platforms—and then compiled separately for some architectures or OS versions and tested on each. Cross-platform UI frameworks like Qt have shortcomings compared to native toolkits, they don’t support all the platforms there are or look ugly on some of them, and still require separate compilation and testing for each target.
There’s a much bigger market supply of web developers than of desktop UI developers. This is a cause, rather than an effect, of the Web’s success: the most popular desktop OS is Windows, and it doesn’t come with an IDE or compiler toolchain; until recent years, Windows native code IDEs/compilers (and some UI toolkits) cost a lot of money; but all you need for Web development is a text editor and a browser. So a lot of people first learned to program by writing web pages with Javascript.
Desktop client apps require update management, which takes a lot of skill, time and money and annoys users. Web apps always have matching server/client versions, no one runs outdated versions, and you can easily revert changes or run A/B tests.
Lots of people can’t or won’t install new unfamiliar desktop software. Many techies have taught their families never to download executables from the Web. Many people are just very limited in their computer using abilities and are unable to install software reliably. But everyone is able and willing to click on a hyperlink.
Even when a user can install new software, it’s a trivial inconvenience, which can be psychologically significant. Web applications looking for mass adoption benefit greatly from being easy to try: many more users will spend thirty seconds evaluating whether they want to use a new service if they don’t have to wait two minutes to install software to try it out.
Depends on what the alternative would have been, I guess. It’s easy to imagine something better—a better Web, even—but that doesn’t mean we would have gotten that something better if the Web had failed.
HTTP isn’t a problem. Or rather, it’s not this problem. I may grumble about people using HTTP where it isn’t the technologically correct solution, but that’s not really important and is unrelated in any case.
I don’t think the problem of proprietary services is entirely due to the success of the web. It was encouraged by it, but I don’t think this was the main reason. And I don’t really have a good reason for thinking there’s any simple model for why things turned out this way.
Just a guess: having to install a special client? The browser is everywhere (it comes with the operating system), so you can use web pages on your own computer, at school, at work, at a neighbor’s computer, at a web cafe, etc. If you have to install your own client, then outside of your own computer you are often not allowed to do it. Also, many people just don’t know how to install programs.
And when most people use browsers, most debates will be there, so the rest will follow.
That doesn’t explain why people abandoned Usenet. They had the clients installed, they just stopped using them.
The number of people using the Internet and the Web has been increasing geometrically for more than two decades. New users joined new services, perhaps for the reasons I gave in my other comment. Soon enough the existing Usenet users were greatly outnumbered, so they went to where the content and the other commenters were.
Yes, the network effect. But is that all?
It’s not an explanation for why new users didn’t join existing services like Usenet, just for why even the people already using Usenet eventually left.
The e-mail client that came pre-installed with Windows 95 and several later Windowses also included newsgroup functionality.
This is similar to my proposal.
Go for it. If we listened to cranks more, we could have finished that Tower of Babel.
One conceptual difference between netnews (Usenet, NNTP, etc.) and current bloggyweb systems (LW, Reddit, Wordpress, Livejournal, etc.) is that bloggyweb systems have two kinds of messages, whereas netnews has only one.
The two kinds of messages in the bloggyweb are often called “posts” and “comments”. A post is a top-level item. A comment is always attached to a single post. Some bloggyweb systems allow a tree structure of comments descending from a post. But comments and posts are fundamentally different, not only visually but also in the database schema behind them. They are also socially different: the ability to create a post is often restricted, whereas any damnfool can spam the comments. Comments are inferior to posts in every way: they are less searchable, they often can’t be independently linked-to, they are presented as subordinate to posts in the user interface, etc.
In the netnews system, there is only one kind of message. Messages can contain metadata that refers to other messages — particularly by saying “this message is a reply to that one.” If you want to start a new thread, you just create a message that is not a reply to any other message. If you want to continue a thread, you reply to a message in that thread. But a “thread” is not a thing — it’s just a chain of messages linked to each other by metadata.
There are also other major differences. In the bloggyweb system, topical tagging is an afterthought; you find messages by following sites such as lesswrong.com, or forums such as reddit.com/r/rationality. In the netnews system, topical tagging is how anyone ever finds any messages. Topical tags in netnews are called “newsgroups”. The user interface makes it seem like messages are inside newsgroups, but really a newsgroup is just a bit of indexing for tags, along with some glued-on rules for things like moderation.
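To make the contrast concrete, here is a toy sketch of the two data models in Python; every class and field name is invented for illustration, not taken from any real system.

    from dataclasses import dataclass, field
    from typing import Optional

    # Bloggyweb: two kinds of message, and comments are subordinate to posts.
    @dataclass
    class Post:
        post_id: int
        author: str
        body: str

    @dataclass
    class Comment:
        comment_id: int
        post_id: int               # a comment cannot exist without a post
        parent_id: Optional[int]   # comment tree, where the system allows one
        author: str
        body: str

    # Netnews: one kind of message; threads and groups are only metadata.
    @dataclass
    class Message:
        message_id: str            # e.g. "<abc123@example.org>"
        newsgroups: list           # topical tags, e.g. ["rec.arts.sf.written"]
        author: str
        body: str
        references: list = field(default_factory=list)  # empty means "new thread"

In the netnews model, a “thread” is just the transitive closure of the references metadata, and a “newsgroup” is just an index over the tags.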
Easy entrance is how September happened, both on LessWrong and on Usenet.
My personal bias here is that I see little hope for most of the application-level network protocols built in the 80s and 90s, but have high hopes for future federated protocols. Urbit in particular, since a certain subtribe of the LW diaspora will already be moving there as soon as it’s ready.
I’m pretty sure the problem isn’t primarily technical—it’s not that Usenet mechanisms or protocols stopped working, it’s that the interesting conversations moved elsewhere. Sure, a woeful security model (trivial forgery, unauthenticated moderation headers) helped it along, but the fundamental community tension (it’s not possible to be inclusive and high quality for very long) is what killed it.
LessWrong is actually pretty good in terms of keeping the noise down. There are a few trolls, and a fair number of not-well-thought-out comments (case in point: what you’re reading now), but they’re not enough to drown out quality if it were still here. Where we’re failing is in attracting interesting deep thoughts from people willing to expand and discuss those thoughts here.
My analysis saw the fundamental problem as the yearning for consensus. What was signal? What was noise? Who was trolling? Designers of forum software go wrong when they believe that these are good one-place questions with actual one-place answers. The software is designed in the hope that its operation will yield these answers.
My suggestion, Outer Circle, got discussed on Hacker News under the title “Saving forums from themselves with shared hierarchical white lists”, and I managed to flesh out the ideas a little.
Then my frail health got even worse and I never did anything more :-(
Not sure about details, but the general idea seems right to me. My thoughts on the topic are usually something like: “How is it possible that in real life we can filter the good stuff much easier than online? I guess because in real life we can use strategies X, Y, Z, but there are not digital equivalents of them in online systems. We cannot use our usual strategies online, because the corresponding button is simply not there.” In real life:
different people see different content, because they use different sources of content
people show interesting stuff to their friends
people have different personas for different friends
sometimes a friend sees more than one persona; sometimes we hide a persona from some people
sometimes we agree to talk only about a specific topic for a while
Okay, I probably missed a few important things. But this is already difficult to do on many websites.
For example, I miss the “persona” feature on Facebook. Having multiple accounts is discouraged. There is an option to post something that only a selected group of friends can read, but that is not what I want. Sometimes I want to post an article that anyone can read, but which by default appears only on the walls of some of my friends.
The most obvious example: different languages. There is no point in spamming my English-speaking friends’ walls with comments written in Slovak. On the other hand, if they decide to view them and use Google Translate, why not? It’s not like I want to keep something secret; I just predict with high enough probability that they won’t care, so I don’t want to bother them. Also, I want to keep those comments accessible to Slovak-speaking people who are not in my contacts.
If I understand it correctly, Facebook only gives me two options: public (which will push the message on everyone’s wall) or private (which will hide the message from everyone except a few hand-picked people), and neither is what I want. This would be easy if I could just have two personas, one for each language, and anyone in my contact list would have an option to follow just one of them, if they want.
Similarly, I could have personas for “private life”, “politics”, “rationality”. My relatives probably want to see the photos of my baby, but don’t care about my opinions on Bayes’ Theorem. For other contacts, it may be the other way round. Sometimes the personas intersect (a post could be about politics and in the Slovak language; or perhaps a political comment on local affairs that is uninteresting to a foreigner). Sometimes they don’t apply (a photo of a baby is language-independent).
So perhaps these “personas” could be just some predefined flags, applied to any content I make, in any combination. And my friends could specify that they are interested in some personas and uninterested in others. Access to some personas could be limited.
...but this is obviously far from the complete proposal.
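A minimal sketch of how the flags might work, in Python; every name here is hypothetical, not a real design:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        personas: frozenset    # e.g. frozenset({"slovak", "politics"})

    def feed(posts, subscribed):
        # Show a post if it carries at least one persona the reader opted into.
        return [p for p in posts if p.personas & subscribed]

    posts = [
        Post("Photo of the baby", frozenset({"private-life"})),
        Post("Comment on local elections, in Slovak", frozenset({"slovak", "politics"})),
    ]
    print([p.text for p in feed(posts, {"private-life"})])
    # -> ['Photo of the baby']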
Also, the whole interface must be very simple, especially for people who don’t give a fuck about the sophisticated features. There must always be a “default” setting that the Average Joe can use; otherwise the Average Joe will complain about the difficult software and will not use it, which hurts the value of the whole network.
That is an excellent and thought-provoking essay, and a novel approach.
...I actually don’t have more to say about it, but I thought you’d like to know that someone read it.
And I second this. Short, readable and intriguing.
Yep. That is THE problem that LW has to solve.
Notice how it doesn’t care about which protocols are used to shuffle which bits back and forth.
Protocols have an impact on discussion, and discussion has an impact on what articles people write.
Otherwise, Eliezer could have posted his Sequences on 4chan.
Not protocols. High-level structure of a BBS/mailing list/forum/Twitter/etc. Protocols (in the technical sense) provide some constraints on what kind of structures can be built on their basis, but there are enough degrees of freedom to construct very different things on top of the same protocols.
So the difference between LW and 4chan is protocols..? X-)
And now that I actually write it down and compare it to previous online communities I’ve been part of and loved (including a few mixed online/offline ones), which have universally followed the same pattern of growth, overgrowth, loss of some driving valuable members without obvious replacement, and slow decay into irrelevance (to me; at least two of them are going strong, just with a different feel than when I was involved), I’m pretty pessimistic.
I’m going to put some effort into being OK with LW as it is, enjoying the parts I enjoy and being willing to follow those parts I’m missing to their new homes.
This fits my own prior experience of the life cycle of a community—but when my previous community failed, a fragment of it broke off and rebuilt itself in a new form. That fragment still exists as a coherent tribe more than a decade later, and I still love it even if I disagree with certain, uh, technical decisions surrounding the splintering process.
So it’s not impossible.
Oh, indeed—fragments or even whole (slightly altered) communities live on. Two of my prior identity-tied groups are still meeting and going strong, they’re just not producing original research or even super-deep discussions on their topics. I still have fond feelings toward them, but I don’t participate enough to consider them part of my identity.
This is primarily a reminder to myself that this is okay. I can enjoy LW for what it is rather than lamenting what it was.
Can’t we just add a new ‘link’ post type to the current LW? Links and local posts would both have comment threads (here on LW), the only difference is the title of the linked post would link to an outside website/resource.
I’ve often griped about how the web X.0 is still miles behind usenet readers and even mailing list software of the 90s for forum discussions.
I saw some talk about the problem of requiring installation of an NNTP client.
Are there no reasonably sized javascript libraries that can be loaded as an in browser nntp client?
As for the Diaspora, couldn’t we just link/insert the blog posts of diaspora authors and discuss?
In-browser javascript can’t open raw TCP connections, so it can’t talk to real NNTP servers. It can only use websockets, which are roughly TCP-over-HTTP.
If you also happen to be running an HTTP server, you can use a websocket-to-tcp bridge like websockify.
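For illustration, the bridge amounts to roughly the following sketch, assuming the third-party Python websockets package (whose handler signature varies a bit between versions); host names and ports are placeholders, and the real websockify tool already does this job properly.

    # A toy websocket-to-NNTP bridge in the spirit of websockify. Sketch only.
    import asyncio
    import websockets

    NNTP_HOST, NNTP_PORT = "news.example.com", 119   # hypothetical server

    async def bridge(ws):
        reader, writer = await asyncio.open_connection(NNTP_HOST, NNTP_PORT)

        async def ws_to_tcp():
            async for data in ws:                    # frames from the browser client
                writer.write(data if isinstance(data, bytes) else data.encode())
                await writer.drain()

        async def tcp_to_ws():
            while chunk := await reader.read(4096):  # raw NNTP bytes from the server
                await ws.send(chunk)

        await asyncio.gather(ws_to_tcp(), tcp_to_ws())

    async def main():
        async with websockets.serve(bridge, "localhost", 8119):
            await asyncio.Future()                   # serve until interrupted

    asyncio.run(main())

The in-browser javascript client would then speak NNTP over the websocket to localhost:8119.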
Why exactly do you want an in-browser nntp client?
A number of comments expressed that getting people to install nntp clients was probably a non-starter. A browser client is all you’re going to get.
A javascript client seemed like a solution to that.
It is a non-starter, but there are ways to get the equivalent of a client in a web browser without using javascript to do it.
So what’s the best solution you have for the problem?
As someone with no knowledge of NNTP, I’m in favor of this sequence. As far as I’m concerned, much looks like on-topic craft/community material.
Think I’ll throw this in here.
The summary is an image which doesn’t play nice with LW’s fixed layout.
If the problem is that our best authors went elsewhere, would it not be a good idea for fans to take their best writing and re-post it here for them? I mean, if they’d actually prefer that not to happen, then ok. But are we sure about that?
What were their stated reasons for leaving? What were their real reasons?
Negativity in the discussion was mentioned. Not sure how important this is compared with other reasons.
Also, some people post both LW-type content and non-LW-type content. The latter does not belong on LW, so they create a separate blog. When the blog attracts its own community of readers, they may prefer to also post the LW-type content there, especially when the boundaries are not clear. (Some of them do repost the LW-type content here afterwards.)
In my opinion, the essence of the problem is that people instinctively play status games all the time. Even when they say that they would prefer to do something else instead. It is hard to abandon the game when even “saying that you would prefer to stop playing the game” can be used as a successful move within the game. Actually, denying that you are playing the game is almost a requirement in most situations; and accusing other people of playing the game is an attack move within the game. The game goes on automatically; whatever you do, you gain or lose a few points, and other people see it. Even if you say “I am not playing the game”, other people see you winning points, and they also want a few points for themselves.
And then, we have the instinct that status is connected with various things, especially with the ability to hurt other people and to defend yourself successfully from being hurt. Oh, we are civilized people, so in most situations we avoid the worst forms of violence, but in every situation there is a permissible range: maybe only verbal attacks, maybe only passive-aggressive behavior, but some of us are very good at using what we can. Seeing that someone gained too many points, without the ability to defend themselves and attack their enemies, provokes an attack. Not necessarily from someone who wants to replace the target, but simply from someone who feels that the difference in points between them and the target has become disproportionately large compared with their own estimate of how it should be.
How it looks from outside (among civilized people who wouldn’t admit playing the game) is illustrated here. Essentially, whenever you do something that is “too good” (something that brings you much more points than you “should have” according to your perceived ability to attack and defend yourself), many people will feel the urge to criticize you and your work, to alleviate the difference. From inside, I guess they will either convince themselves that the work is actually not good, or imagine some dangerous things you are totally going to do with your newly gained points (and see themselves as heroes who prevented this danger), or simply deny that they are attacking you.
This can be very exhausting to a person who wants to focus on creating good content, but doesn’t want to spend their time defending themselves from attacks. The usual reaction is that the person stops producing the good content, and the status balance is maintained. Which is quite bad for us, who want to consume the good content.
Another option is to retreat to a fortress, where the defense is much easier. Such as Facebook, where you can block the attackers in a few seconds, and they usually won’t create another account only to bother you (and even if they do, you can still set your messages visible to only your friends). If you are willing to solve the related technical problems, you can use your own blog.
So, the question is: can we do anything to prevent good authors from having to retreat to their own fortresses (or stop writing / publishing) after they gain “too many” points for doing what we want them to do? What kind of platform would achieve that?
There is a standard solution, and most people call it “censorship”. You create a place where the authors can publish, and where all attacks are removed. Preferably by a third-party moderator, so the authors don’t even see them, and don’t have to waste their own time deleting them.
I can imagine how most people would react to this proposal. No, we can’t remove all negative feedback; we need a way to tell genuinely bad authors that their work honestly sucks! Otherwise the stupidity will prevail! Sure… but the whole problem is that we are running on corrupted hardware, so when the situation comes and our status-regulation emotion kicks in, we will start believing that the author is genuinely bad, the work genuinely sucks, and there is a very real and very urgent danger of genuinely horrible things happening unless the author is provided negative feedback as strongly as possible. :(
(“Oh no, Eliezer has an opinion on quantum physics that only a few experts agree with, but other experts disagree! And he believes that Bayes’ Theorem is super important, and the Bayes’ Theorem really is important, but isn’t as important as he believes! And he once deleted Roko’s Basilisk and provided a totally unsatisfying PR explanation! And he asks people to send him money! And he has multiple girlfriends! This is totally a cult, worse than Scientology! They are going to spread wrong interpretations of quantum physics and then they will commit mass suicide! Someone think of the children! Don’t read the Sequences! Don’t read HPMoR! Tell everyone, and warn them about the danger! Write an article on RationalWiki, and Wikipedia, and your local news, and contact all skeptical organizations you know, and post on Facebook and Reddit! Someone stop this dangerous guy from having too much status!”)
The proposal of “censorship” is value-neutral. There are authors who should be attacked; there are authors who shouldn’t be; the proposed mechanism protects both equally. Making a mechanism that protects that and only that which should be protected is a FAI-complete problem. At some moment a human judgement has to be applied. At that moment, you should expect the known psychological forces to manifest.
Another option is to remove debates completely; then you avoid the accusations of censorship, but you also lose the potentially good comments. Sure, the people will comment on a different website, but that’s okay—such comments aren’t linked to the criticized article as strongly as the comments directly below the article would be. (And you cannot prevent comments on a third-party website anyway.) Publishing a book is one way to do this; no one can write their comment into all copies of your book.
Yet another option is to make attacking costly: for example, you would be allowed to publish a critique of an article, but that critique itself would have to be a well-written article (preferably explaining and supporting their own position, not merely saying “X is wrong”, so that they are now equally exposed to an attack) and have to be accepted by editors. Of course the editors are going to be accused of partiality; that’s inevitable. (Replace the editors by a popular vote, then we need someone to decide who is an eligible voter, and we still have the status-regulation emotion urging people to upvote a critique that doesn’t fulfill the criteria but is well-deserved anyway.)
One serious, business answer is medium.com
Here is a look at what they are trying to do. Sample:
Could you describe how specifically the commenting works on medium.com? Because that seems to me like an important part where you just can’t make everyone happy, because some people want mutually contradictory things (such as “to filter unwanted comments” vs “not to be filtered”).
Commenting is actually one of the most interesting parts of Medium. It’s surprisingly similar to a combination of your “removing debates” and “making attacking costly”—you can reply to a post on Medium, and your reply is itself a post on your own Medium, with a metadata tag linking it to the post you’re replying to. People will generally not see your reply underneath the original post, but they will see an ‘other replies’ button they can click which will reveal it. But people can recommend your post; if your post is recommended by (1) the original post author, (2) Medium staff (I think?), or (3) someone I follow, then I will automatically see it under the original post like a ‘comment’, above the ‘show other replies’ button.
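So the display rule is roughly this (a sketch; the field and function names are invented, not Medium’s actual API):

    def shown_under_post(reply, post_author, staff, my_follows):
        # A reply surfaces as a "comment" only if the right people recommended it.
        recs = set(reply["recommended_by"])
        return post_author in recs or bool(recs & staff) or bool(recs & my_follows)

    def split_replies(replies, post_author, staff, my_follows):
        shown = [r for r in replies if shown_under_post(r, post_author, staff, my_follows)]
        hidden = [r for r in replies if r not in shown]   # behind "show other replies"
        return shown, hidden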
Wow, I’m impressed! This is pretty close to how I imagined it, and it also seems simple enough for everyone to understand.
Essentially, by default you only see content recommended by someone you care about (i.e. in long term you care about the people you follow; and in short term you care about the person whose article you are reading right now). So people cannot insert themselves into debates forcefully.
I’m trying to imagine what Facebook would look like if it switched to this system (using the existing “like” button as the sign of approval). So when you post something on your wall, the comments you “liked” are displayed to all readers; the comments you didn’t like are displayed only to friends of the person who posted them, and you are not allowed to remove any comment.
Sounds reasonable, assuming there is a visible difference between “the comments I didn’t approve because I don’t want to approve them” (e.g. the “hide” button), and “the comments I haven’t approved because I haven’t seen them yet”.
The only possible form of “spamming” here is to annoy someone by posting many replies to their articles, and even then you are only annoying them privately. (There should be a way to block a user, that is, “auto-hide” all their replies, so the only remaining way of “spamming” would be posting many replies with many sockpuppets. This would cost the usual attacker much more time than it costs the attacked person.)
Maybe the disadvantage is that it kills the “linear debate of trivial comments”: the type of discussion where everyone types only a line or two, which best resembles how people chat. Maybe that’s good, but people who want to chat without writing an article-length reply might miss this feature.
So I guess my perfect system would be a combination of the Medium way, plus old-style linear discussion below the article, where all replies are invisible until approved by the author (optionally, the author could switch it to “auto-approve” with possibility to delete anything afterwards). Or, to make it more unified, every reply would start as a comment below the article, but you would have the checkbox “also show this reply on my homepage as an article”. All approved replies would be displayed below the article, but replies longer than three lines (that includes full articles) would be shortened until you click to expand them.
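In code, the moderation flow of that combined proposal might look something like this sketch (all names invented, not a real implementation):

    PENDING, APPROVED, HIDDEN = "pending", "approved", "hidden"

    def moderate(author_action, auto_approve=False):
        # Replies start PENDING and stay invisible until the author acts,
        # unless the author switched the thread to auto-approve.
        if auto_approve:
            return HIDDEN if author_action == "hide" else APPROVED
        return {"approve": APPROVED, "hide": HIDDEN}.get(author_action, PENDING)

    def render(reply_text, state, max_lines=3):
        if state != APPROVED:
            return None                               # not displayed at all
        lines = reply_text.splitlines()
        if len(lines) > max_lines:                    # long replies are collapsed
            return "\n".join(lines[:max_lines]) + "\n[click to expand]"
        return reply_text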
I don’t play there so I don’t know—but it’s an open website, you can go take a look any time...
“Only a few” are as committed to it as Eliezer is, but many many more consider it at least somewhat plausible.
I think the word you’re looking for is “moderation”.
It’s one of those flexible words: I keep the discussion polite; you moderate; he censors.
They are usually called “irregular verbs” :-)
If I remember right, the most recent survey asked those exact questions. So we may well find out.
Facebook.
Yes, it’s quite unfortunate, but that is what the masses have voted for :-/
I disagree with the implications that the masses (i.e. at least a majority of web users) have voted for facebook (i.e. actively chose it when an alternative with the same featureset was available) and that, even if this is true, it will not or cannot be changed.
I also note that the diaspora is not primarily located on facebook. Wordpress-like blogs and tumblr seem to be two strong focuses.
Finally, I don’t have a strong sense that “the masses” of whom it might be said that they actually prefer Facebook content to independent websites, are typical Rationalist Diaspora members, or even typical potential new recruits. Do you?
I love to hate Facebook. Facebook posters are pretty much my notion of “the outgroup of Good Web Users and Blog Authors”. But what’s the evidence for what you say?
And where is EY posting nowadays?
How many people besides EY are posting on facebook? I’m not saying nobody does, but it seems to me to be a minority. This may be just a sampling bias, because I dislike facebook, or because ‘affiliated rationalist diaspora blog’ lists don’t include Facebook posters. Do you have quantitative data?
About a billion and a half people?
I’m not saying that the “rationalist tribe” has migrated to Facebook. It hasn’t. The original quote was “many of the technical challenges of the diaspora were solved problems” and Facebook does indeed solve many diaspora problems—for example, dispersed extended families and/or clans find Facebook a very useful tool to keep in touch and coordinate things.
This partially goes to the same point of avoiding overreach—devising a better way of uniting a diaspora is a much harder task than making LW better.
At least one of us misunderstood the other, but it doesn’t seem worth the time to figure out why and where. We agree that the rationalist diaspora/tribe hasn’t mostly migrated to Facebook.
If you make LW sufficiently better, it may unite the diaspora behind it. If it doesn’t, then is it really worth our while to make LW better, at least with proposals of huge changes like this one?