I find it difficult to believe that mere convenience, even amplified by the network effect, would have such a drastic result. As you say, HTTP ate everything. What allowed it to do that?
It’s more appropriate to say that the Web ate everything, and HTTP was dragged along with it. There are well known reasons why the Web almost always wins out, as long as the browsers of the day are technologically capable of doing what you need. (E.g. we used to need Flash and Java applets, but once we no longer did, we got rid of them.)
Even when you’re building a pure service or API, it has to be HTTP, or else web clients won’t be able to access it. And once you’ve built an HTTP service, valid reasons to also build a non-HTTP equivalent are rare: raw performance, efficiency, or full-duplex semantics, and those are seldom needed.
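To make that concrete, here’s a minimal sketch in browser-side TypeScript, with hypothetical endpoints: the ordinary request/response case that plain HTTP covers, and the rare full-duplex case, which, tellingly, still begins life as an HTTP upgrade.

```typescript
// Hypothetical endpoints, for illustration only.

// Ordinary request/response: plain HTTP covers it.
async function loadItem(): Promise<unknown> {
  const res = await fetch("https://api.example.com/items/42");
  return res.json();
}

// The rare full-duplex case: the server can push at any time. Even here,
// a WebSocket starts as an HTTP request that gets upgraded, so the
// service stays reachable from a web client.
function openStream(): WebSocket {
  const ws = new WebSocket("wss://api.example.com/stream");
  ws.onopen = () => ws.send(JSON.stringify({ subscribe: "items" }));
  ws.onmessage = (ev) => console.log("server pushed:", ev.data);
  return ws;
}
```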
Finally, there’s a huge pool of coders specializing in web technologies.
HTTP eating everything isn’t so bad. It makes everything much slower than raw TCP, and it forces the horribly broken TLS certificate authority model, but it also has a lot of advantages for many applications. The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
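To illustrate that last distinction, a rough TypeScript sketch with a made-up host and wire format: openness is a property of the protocol spec, not of the transport underneath it.

```typescript
import { connect } from "node:net";

// The same documented message can ride on raw TCP...
function sendOverTcp(): void {
  const sock = connect(7777, "chat.example.com", () => {
    sock.write("SEND #room hello\r\n"); // hypothetical line-based wire format
    sock.end();
  });
}

// ...or be carried over HTTP with identical semantics. Either way, anyone
// can implement a compatible client from the published spec.
async function sendOverHttp(): Promise<void> {
  await fetch("https://chat.example.com/api/send", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ room: "#room", text: "hello" }),
  });
}
```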
There are well known reasons why the Web almost always wins out
I’ve been asking for them and got nothing but some mumbling about convenience. Why did the Web win out in the 90s? Do you think it was a good thing or a bad thing?
or else web clients won’t be able to access it
If you specify that your client is a browser, well, duh. That is not always the case, though.
The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
But you’ve been laying this problem at the feet of the web/HTTP victory. So HTTP is not the problem?
I think it was more in the 00s, but either way, here are some reasons:
The core feature of the Web is the hyperlink. Even the most proprietary web application can allow linking to pieces of its content—and benefits a lot from it. And it can link out, too. I can link to a Facebook post, even if I can’t embed it in my own website. But I can’t link to an email message. And if I include an outgoing link in my email, clicking it will open the web browser, a different application, which is inconvenient.
People often need to use non-personal computers: at work, in internet kiosks, etc. They can’t install new client software on them, but there’s always a browser. So a website is available to more people, at more times and places.
Pieces of web content can be embedded in other websites no matter how they are written. This is a kind of technology that never really existed for desktop applications. If I need to display an ad, or preview a link’s target, or use a third-party widget, or embed a news story, I can just put it in an iframe and it will work, and I don’t care how it’s implemented. This is a huge difference from the desktop world: just try to embed a Qt widget in a GTK application.
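A minimal sketch of that, in browser TypeScript with a made-up widget URL: the host page drops in a frame and never needs to know how the embedded content is implemented.

```typescript
function embedWidget(container: HTMLElement): void {
  const frame = document.createElement("iframe");
  frame.src = "https://widgets.example.com/weather?city=oslo"; // hypothetical
  frame.width = "300";
  frame.height = "200";
  container.appendChild(frame); // the embedded page renders itself
}
```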
A well written website works on all browsers. At worst, it might look bad, but it would still be usable. Client apps need to be written separately for different platforms—Windows, Mac, Linux, and the mobile platforms—and then compiled separately for some architectures or OS versions and tested on each. Cross-platform UI frameworks like Qt have shortcomings compared to native toolkits: they don’t support every platform, look out of place on some, and still require separate compilation and testing for each target.
There’s a much bigger market supply of web developers than of desktop UI developers. This is a cause, rather than an effect, of the Web’s success: the most popular desktop OS is Windows, and it doesn’t come with an IDE or compiler toolchain; until recent years, Windows native-code IDEs/compilers (and some UI toolkits) cost a lot of money; but all you need for Web development is a text editor and a browser. So a lot of people first learned to program by writing web pages with JavaScript.
Desktop client apps require update management, which takes a lot of skill, time, and money, and annoys users. Web apps always have matching server/client versions, no one runs outdated versions, and you can easily revert changes or run A/B tests.
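As a minimal sketch (Node TypeScript, hypothetical service) of why that’s cheap on the web: the variant is chosen per request, so every user gets the current code on their next page load, and a rollback or an A/B split is just a server-side change.

```typescript
import { createServer } from "node:http";

const server = createServer((_req, res) => {
  // Hypothetical 50/50 split; a real system would hash a stable user id
  // so each user sees a consistent variant.
  const variant = Math.random() < 0.5 ? "A" : "B";
  res.setHeader("Content-Type", "text/html");
  res.end(`<!doctype html><body data-variant="${variant}">hello</body>`);
});

server.listen(8080);
```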
Lots of people can’t or won’t install new, unfamiliar desktop software. Many techies have taught their families never to download executables from the Web. Many people are simply limited in their computer skills and can’t install software reliably. But everyone is able and willing to click a hyperlink.
Even when a user can install new software, it’s a trivial inconvenience, which can be psychologically significant. Web applications looking for mass adoption benefit greatly from being easy to try: many more users will spend thirty seconds evaluating whether they want to use a new service if they don’t have to wait two minutes to install software to try it out.
Do you think it was a good thing or a bad thing?
Depends on what the alternative would have been, I guess. It’s easy to imagine something better—a better Web, even—but that doesn’t mean we would have gotten that something better if the Web had failed.
But you’ve been laying this problem at the feet of the web/HTTP victory. So HTTP is not the problem?
HTTP isn’t a problem. Or rather, it’s not this problem. I may grumble about people using HTTP where it isn’t the technologically correct solution, but that’s not really important and is unrelated in any case.
I don’t think the problem of proprietary services is entirely due to the success of the web. It was encouraged by it, but I don’t think this was the main reason. And I don’t really have a good reason for thinking there’s any simple model for why things turned out this way.