Basically, a solution to cross-linking comments hasn’t been devised yet, I think.
The closest existing solutions are off-site comment management systems like Disqus. But they’re proprietary comment storage providers, not a neutral API. And each such provider has its own model of what comments are and you can’t change it to e.g. add karma if it doesn’t do what you want.
Solving such integration and interoperability problems is what standards are for. At some point the Internet decided it didn’t feel like using a standard protocol for discussion anymore, which is why it’s even a problem in the first place.
(HTTP is not a discussion protocol. Not that I think you believe it is; just preempting the obvious objection.)
Disqus is just a SaaS provider for a commenting subsystem. The trick is to integrate comments for/from multiple websites into something whole.
That’s an interesting point. What are the reasons NNTP and Usenet got essentially discarded? Are some of these reasons good ones?
Usenet is just one example of a much bigger trend of the last twenty years: the Net—standardized protocols with multiple interoperable open-source clients and servers, and services being offered either for money or freely—being replaced with the Web—proprietary services locking in your data, letting you talk only to other people who use that same service, forbidding client software modifications, and being ad-supported.
Instant messaging with multi-protocol clients and some open protocols was replaced by many tens of incompatible services, from Google Talk to WhatsApp. Software telephony (VOIP) and videoconferencing, which had some initial success with free services (Jingle, the SIP standards), were replaced by the likes of Skype. Group chat (IRC) has been mostly displaced by services like Slack.
There are many stories like these, and many more examples I could give for each story. The common theme isn’t that the open, interoperable solutions used to rule these markets—they didn’t always. It’s that they used to exist, and now they almost never do.
Explaining why this happened is hard. There are various theories but I don’t know if any of them is generally accepted as the single main cause. Maybe there are a lot of things all pushing in the same direction. Here are a few hypotheses:
Open protocols don’t have a corporate owner, so they don’t have a company investing a lot of money in getting people to use them, so they lose out. For-profits don’t invest in open protocol-based services because people can’t be convinced to pay for any service, so the only business model around is ad-based web clients. And an ad-based service can’t allow an open protocol, because if I write an open source client for it, it won’t show the service provider’s ads. (Usenet service used to cost money, sometimes as part of your ISP package.)
The killer feature of any communications network is who you can talk to. With proprietary networks this means who else made the same choice as you; this naturally leads to winner-takes-all situations (a toy sketch of this dynamic follows at the end of this comment), and the winner is incentivized to remain proprietary so it can’t be challenged. Interoperable solutions can’t compete, because the proprietary providers will be able to talk to interoperable users and their own proprietary users, but not vice versa.
In the 80s and early 90s, when the first versions of crucial protocols like email were created, the Net was small and populated by smart technical people who cared about each other’s welfare and designed good protocols for everyone’s benefit—and were capable of identifying and choosing good programs to use. Today, the Web has three to four orders of magnitude more users (and a similar increase in programmers), and they aren’t any more technologically savvy, intelligent or altruistic than the general population. Somewhere along the way, better-marketed solutions started reliably winning out over solutions with superior technology, features and UX. Today, objective product quality and market success may be completely uncorrelated.
There are other possibilities, too, which I don’t have the time to note right now. It’s late at night for me, so I apologize if this comment is a bit incoherent.
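To make the winner-takes-all dynamic in the second hypothesis concrete, here is a toy sketch in Python. The three networks, their starting sizes and the noise term are invented for illustration; the only assumption carried over from the argument above is that each newcomer joins whichever proprietary network already lets them talk to the most people.

```python
import random

def simulate(new_users: int = 10_000, seed: int = 0) -> dict:
    """Each newcomer joins the network offering the most people to talk to,
    plus a small idiosyncratic preference (the random term)."""
    random.seed(seed)
    networks = {"A": 120, "B": 100, "C": 80}  # assumed starting sizes; A is slightly ahead
    for _ in range(new_users):
        # The value of a proprietary network to a newcomer is simply how many
        # people are already on it, since they can talk to no one else.
        best = max(networks, key=lambda name: networks[name] + random.uniform(0, 50))
        networks[best] += 1
    return networks

print(simulate())  # A's small head start compounds; B and C barely grow after the first rounds.
```

Once A’s lead exceeds the idiosyncratic noise, every subsequent user picks A, which is exactly the lock-in described above.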
the Net—standardized protocols with multiple interoperable open-source clients and servers, and services being offered either for money or freely—being replaced with the Web
The Web, of course, is nothing but a standardized protocol with multiple interoperable open-source clients and servers, and services being offered either for money or freely. I am not sure why you would want a lot of different protocols.
The net’s big thing is that it’s dumb and all the intelligence is at the endpoints (compare to the telephone network). The web keeps that vital feature.
proprietary services locking in your data, letting you talk only to other people who use that same service, forbidding client software modifications, and being ad-supported.
That’s not a feature of the web as opposed to the ’net. Those are business practices, and they are indifferent to what your underlying protocol is. For example, you mention VOIP, and that’s not the “web”.
and now they almost never do.
Never do? Really? I think you’re overreaching in a major way. Nothing happened to the two biggies—HTTP and email. There are incompatible chat networks? So what, big deal...
And an ad-based service can’t allow an open protocol
Sigh. HTTP? An ad-based service would prefer a welded-shut client, but in practice the great majority of ads are displayed in browsers which are perfectly capable of using ad-blockers. Somehow Google survives.
Explaining why this happened is hard.
No, not really. Here: people like money. Also: people are willing to invest money (which can be converted into time and effort) if they think it will make them more money. TANSTAAFL and all that...
The Web, of course, is nothing but a standardized protocol with multiple interoperable open-source clients and servers, and services being offered either for money or freely. I am not sure why you would want a lot of different protocols.
This is like asking why, before HTTP, we needed different protocols for email and IRC and usenet, when we already had standardized TCP underneath. HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
The application-level service exposed by modern websites is very rarely—and never unintentionally or ‘by default’—a standardized (i.e. documented) protocol. You can’t realistically write a new client for Facebook, and even if you did, it would break every other week as Facebook changed their site.
I use the example of Facebook advisedly. They expose a limited API, which deliberately doesn’t include all the bits they don’t want you to use (like Messenger), and is further restricted by a TOS which explicitly forbids clients that would replace a major part of Facebook itself.
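To make the layering point above concrete (TCP and HTTP as agnostic transports versus email or Usenet as application protocols), here is a minimal sketch. The hostnames are placeholders rather than real servers; the point is only that SMTP, NNTP and HTTP all ride on the same agnostic TCP socket, and that it is the application protocol which defines the actual conversation.

```python
import socket

def first_line(host: str, port: int, to_send: bytes = b"") -> bytes:
    """Open a plain TCP connection, optionally send one request line,
    and return the first line the server answers with."""
    with socket.create_connection((host, port), timeout=5) as sock:
        if to_send:
            sock.sendall(to_send)
        return sock.recv(1024).splitlines()[0]

# Same transport (TCP), three different application protocols:
print(first_line("mail.example.org", 25))   # SMTP greeting, e.g. b"220 ... ESMTP"
print(first_line("news.example.org", 119))  # NNTP greeting, e.g. b"200 ... posting allowed"
print(first_line("www.example.org", 80,
                 b"GET / HTTP/1.1\r\nHost: www.example.org\r\n\r\n"))  # e.g. b"HTTP/1.1 200 OK"
```

Swap in real hosts and the transport code runs unchanged for all three; what differs is the vocabulary each server expects next, which is exactly the part that a standard (SMTP, NNTP, HTTP) pins down and a proprietary web application does not.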
The net’s big thing is that it’s dumb and all the intelligence is at the endpoints (compare to the telephone network). The web keeps that vital feature.
That’s true. But another vital feature of the net is that most traffic runs over standardized, open protocols.
Imagine a world where nothing was standardized above the IP layer, or even merely nothing above UDP, TCP and ICMP. No DNS, email, NFS, SSH, LDAP, none of the literally thousands of open protocols that make the Net as we know it work. Just proprietary applications, each of which can only talk to itself. That’s the world of web applications.
(Not web content, which is a good concept, with hyperlinks and so forth, but dynamic web applications like Facebook or Gmail.)
That’s not a feature of the web as opposed to the ’net. Those are business practices, and they are indifferent to what your underlying protocol is. For example, you mention VOIP, and that’s not the “web”.
I mentioned VOIP exactly because I was talking about a more general process, of which the Web—or rather modern web apps—is only one example.
The business practice of ad-driven revenue cares about your underlying protocol. It requires restricting the user’s control over their experience—similarly to DRM—because few users would willingly choose to see ads if there was a simple switch in the client software to turn them off. And that’s what would happen with an open protocol with competing open source clients.
Never do? Really? I think you’re overreaching in a major way. Nothing happened to the two biggies—HTTP and email. There are incompatible chat networks? So what, big deal...
Email is pretty much the only survivor (despite inroads by webmail services). That’s why I said “almost” never do. And HTTP isn’t an application protocol. Can you think of any example other than email?
Sigh. HTTP? An ad-based service would prefer a welded-shut client, but in practice the great majority of ads are displayed in browsers which are perfectly capable of using ad-blockers. Somehow Google survives.
Google survives because the great majority of people don’t use ad blockers. Smaller sites don’t always survive and many of them are now installing ad blocker blockers. Many people have been predicting the implosion of a supposed ad revenue bubble for many years now; I don’t have an opinion on the subject, but it clearly hasn’t happened yet.
people like money. Also: people are willing to invest money (which can be converted into time and effort) if they think it will make them more money. TANSTAAFL and all that...
That doesn’t explain the shift over time from business models where users paid for service, to ad-supported revenue. On the other hand, if you can explain that shift, then it predicts that ad-supported services will eschew open protocols.
HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
Huh? HTTP is certainly an application protocol: you have a web client talking to a web server. The application delivers web pages to the client. It is by no means an “agnostic” protocol. You can, of course, use it to deliver binary blobs, but so can email.
The thing is, because the web ate everything, we’re just moving one meta level up. You can argue that HTTP is supplanting TCP/IP and a browser is supplanting OS. We’re building layers upon layers matryoshka-style. But that’s a bigger and a different discussion than talking about interoperability. HTTP is still an open protocol with open-source implementations available at both ends.
It requires restricting the user’s control over their experience
You are very persistently ignoring reality. The great majority of ads are delivered in browsers which are NOT restricting the “user’s control over their experience” and which are freely available as “competing open source clients”.
Can you think of any example other than email?
Sure. FTP for example.
Smaller sites don’t always survive
Why is that a problem? If they can’t survive they shouldn’t.
That doesn’t explain the shift over time from business models where users paid for service
The before-the-web internet did not have a business model where users paid for service. It pretty much had no business model at all.
Huh? HTTP is certainly an application protocol: you have a web client talking to a web server. The application delivers web pages to the client. It is by no means an “agnostic” protocol. You can, of course, use it to deliver binary blobs, but so can email.
HTTP is used for many things, many of them unrelated to the Web. Due to its popularity, a great many things have been built on top of it.
The point I was making is this: when a server exposes an HTTP API, that API is the protocol, and HTTP is a transport just like TCP and TLS underneath it. The equivalent of a protocol like SMTP on top of TCP is a documented API on top of HTTP. The use of different terms confused this conversation.
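As a concrete illustration of “a documented API on top of HTTP”, here is a minimal sketch. The endpoint, field names and auth scheme are invented for this example; the point is that this JSON contract, not HTTP itself, is the layer a second, interoperable client would have to implement.

```python
import json
import urllib.request

API_BASE = "https://forum.example.org/api/v1"  # hypothetical service

def post_comment(thread_id: str, body: str, token: str) -> dict:
    """Create a comment via a documented HTTP+JSON API
    (the analogue of speaking SMTP on top of TCP)."""
    payload = json.dumps({"thread": thread_id, "body": body}).encode("utf-8")
    request = urllib.request.Request(
        f"{API_BASE}/comments",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(request) as response:  # HTTP is just the transport here
        return json.load(response)  # the documented JSON shape is the real protocol

# A second client written against the same documented contract interoperates;
# without such a contract, there is nothing stable to implement.
```

Whether a contract like this exists, is documented, and is permitted by the terms of service is what separates an interoperable service from a proprietary one.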
But that’s a bigger and a different discussion than talking about interoperability. HTTP is still an open protocol with open-source implementations available at both ends.
My point is, you can’t interoperate with Facebook or Gmail or Reddit just by implementing HTTP; you need to implement an API or “protocol” to talk to them. And if they don’t have one—either deliberately, or because their HTTP traffic just wasn’t designed for interoperability—then there is no open “protocol”.
The great majority of ads are delivered in browsers which are NOT restricting the “user’s control over their experience” and which are freely available as “competing open source clients”.
The great majority of web ads are actually displayed, keeping revenue flowing. They’re not removed by ad blockers. Even if everyone installed ad blockers, I believe ads would win the ensuing arms race. It’s much easier to scramble software, mixing the ads with the rest of the website so they can’t be removed without breaking the whole site, than to write a program to unscramble all websites. (In contrast to the problem of unscrambling a specific given website, which is pretty easy—as shown by the history of software copy protection.)
That is only true because the server provides the client software, i.e. the website’s client components. The fact that the browsers are open source is as irrelevant as the fact that the client OS is. The actual application that’s trying to enforce showing the ads is the website that runs in the browser.
FTP for example.
The amount of use FTP sees today is completely negligible compared to its market share twenty years ago. But I agree, file servers (and p2p file sharing) are good examples of an area where most protocols are documented and interoperable. (Although in the case of SMB/CIFS, they were documented only after twenty years of hard struggle.)
Why is that a problem? If they can’t survive they shouldn’t.
I didn’t say it was a problem…
The before-the-web internet did not have a business model where users paid for service. It pretty much had no business model at all.
That’s not true. There were many services being offered for money. Usenet was one. Email was another, before the advent of ad-supported webmail.
This from here seems pretty accurate for Usenet:
Binary groups being a great big cost sink would be the main thing. The store-and-forward protocol required quite a lot of disk space at the time.
The network relied on “control” messages to create/delete groups automatically (as opposed to manual subscription), which, due to the lack of authentication/encryption in the protocol, were very easy to spoof. A GPG-signing mechanism was later put into place, so that nodes peering with each other could establish a chain of trust by themselves. This was pretty nice in retrospect (and awesome by today’s standards), but the main problem was that creating new groups was a slow and painful approval-based process: people often wanted small groups just for themselves, and mailing lists offered the “same” without any approval required.
Having a large open network started to become a big attractor for spam, and managing spam in a P2P network without authentication is a harder problem to solve than it is for a locally managed mailing list.
Running a local server became so easy and cheap that a mailing list offered local control and almost zero overhead. People who had niche groups started to create mailing lists with open access, and people migrated in flocks. Why share your discussions in comp.programming.functional when you could create a mailing list just for your new fancy language? (It’s pretty sad, because I loved the breadth of the discussions.) Discussions in general groups became less frequent as most of the interesting ones moved to dedicated mailing lists. The trend worsened significantly as forums started to appear, which lowered the barrier to entry for people who didn’t know how to use a mail client properly.
As for NNTP for LessWrong, I would think we also have to take into account that people want to control how their content is displayed/styled. Their own separate blogs easily allow this.
Not just about how it’s displayed/styled. People want control over what kinds of comments get attached to their writing.
I think this is the key driver of the move from open systems to closed: control. The web has succeeded because it clearly defines ownership of a site, and the owner can limit content however they like.
My opinion? Convenience. It’s more convenient for the user to not have to configure a reader, and it’s more convenient for the developer of the forum to not conform to a standard. (edit: I would add ‘mobility’, but that wasn’t an issue until long after the transition)
And it’s more convenient for the owner’s monetization to not have an easy way to clone their content. Or view it without ads. What Dan said elsewhere about all the major IM players ditching XMPP applies.
[Edited to add: This isn’t even just an NNTP thing. Everything has been absorbed by HTTP these days. Users forgot that the web was not the net, and somewhere along the line developers did too.]
I find it difficult to believe that mere convenience, even amplified with the network effect, would have such a drastic result. As you say, HTTP ate everything. What allowed it to do that?
It’s more appropriate to say that the Web ate everything, and HTTP was dragged along with it. There are well known reasons why the Web almost always wins out, as long as the browsers of the day are technologically capable of doing what you need. (E.g. we used to need Flash and Java applets, but once we no longer did, we got rid of them.)
Even when you’re building a pure service or API, it has to be HTTP or else web clients won’t be able to access it. And once you’ve built an HTTP service, valid reasons to also build a non-HTTP equivalent are rare: high performance, efficiency, or full-duplex semantics. These are rarely needed.
Finally, there’s a huge pool of coders specializing in web technologies.
HTTP eating everything isn’t so bad. It makes everything much slower than raw TCP, and it forces the horribly broken TLS certificate authority model, but it also has a lot of advantages for many applications. The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
There are well known reasons why the Web almost always wins out
I’ve been asking for them and got nothing but some mumbling about convenience. Why did the Web win out in the 90s? Do you think it was a good thing or a bad thing?
or else web clients won’t be able to access it
If you specify that your client is a browser, well, duh. That is not always the case, though.
The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
But you’ve been laying this problem at the feet of the web/HTTP victory. So HTTP is not the problem?
I think it was more in the 00s, but either way, here are some reasons:
The core feature of the Web is the hyperlink. Even the most proprietary web application can allow linking to pieces of its content—and benefits a lot from it. And it can link out, too. I can link to a Facebook post, even if I can’t embed it in my own website. But I can’t link to an email message. And if I include an outgoing link in my email, clicking it will open the web browser, a different application, which is inconvenient.
People often need to use non-personal computers: at work, in internet kiosks, etc. They can’t install new client software on them, but there’s always a browser. So a website is available to more people, at more times and places.
Pieces of web content can be embedded in other websites no matter how they are written. This is a kind of technology that never really existed with desktop applications. If I need to display an ad, or preview a link’s target, or use a third party widget, or embed a news story, I can just put it in an iframe and it will work, and I don’t care how it’s implemented. This is a huge difference from the desktop world: just try to embed a Qt widget in a GTK application.
A well-written website works on all browsers. At worst, it might look bad, but it would still be usable. Client apps need to be written separately for different platforms—Windows, Mac, Linux, and the mobile platforms—and then compiled separately for some architectures or OS versions and tested on each. Cross-platform UI frameworks like Qt have shortcomings compared to native toolkits, they don’t support all the platforms there are or look ugly on some of them, and still require separate compilation and testing for each target.
There’s a much bigger market supply of web developers than of desktop UI developers. This is a cause, rather than an effect, of the Web’s success: the most popular desktop OS is Windows, and it doesn’t come with an IDE or compiler toolchain; until recent years, Windows native-code IDEs/compilers (and some UI toolkits) cost a lot of money; but all you need for Web development is a text editor and a browser. So a lot of people first learned to program by writing web pages with JavaScript.
Desktop client apps require update management, which takes a lot of skill, time and money, and annoys users. Web apps always have matching server/client versions, no one runs old versions, and you can easily revert changes or run A/B tests.
Lots of people can’t or won’t install new, unfamiliar desktop software. Many techies have taught their families never to download executables from the Web. Many people are just very limited in their computer skills and are unable to install software reliably. But everyone is able and willing to click on a hyperlink.
Even when a user can install new software, it’s a trivial inconvenience, which can be psychologically significant. Web applications looking for mass adoption benefit greatly from being easy to try: many more users will spend thirty seconds evaluating whether they want to use a new service if they don’t have to wait two minutes to install software to try it out.
Do you think it was a good thing or a bad thing?
Depends on what the alternative would have been, I guess. It’s easy to imagine something better—a better Web, even—but that doesn’t mean we would have gotten that something better if the Web had failed.
But you’ve been laying this problem at the feet of the web/HTTP victory. So HTTP is not a problem?
HTTP isn’t a problem. Or rather, it’s not this problem. I may grumble about people using HTTP where it isn’t the technologically correct solution, but that’s not really important and is unrelated in any case.
I don’t think the problem of proprietary services is entirely due to the success of the web. It was encouraged by it, but I don’t think this was the main reason. And I don’t really have a good reason for thinking there’s any simple model for why things turned out this way.
What are the reasons NNTP and Usenet got essentially discarded?
Just a guess: having to install a special client? The browser is everywhere (it comes with the operating system), so you can use web pages on your own computer, at school, at work, at a neighbor’s computer, at a web cafe, etc. If you have to install your own client, then outside of your own computer you are often not allowed to do it. Also, many people just don’t know how to install programs.
And when most people use browsers, most debates will be there, so the rest will follow.
That doesn’t explain why people abandoned Usenet. They had the clients installed; they just stopped using them.
The number of people using the Internet and the Web has been increasing geometrically for more than two decades. New users joined new services, perhaps for the reasons I gave in my other comment. Soon enough the existing Usenet users were greatly outnumbered, so they went to where the content and the other commenters were.
Yes, the network effect. But is that all?
It’s not an explanation for why new users didn’t join existing services like Usenet, just for why even the people already using Usenet eventually left.
The e-mail client that came pre-installed with Windows 95 and several later versions of Windows also included newsgroup functionality.