Changing UIs has costs to users. So does charging for your service. Is charging for your service unethical? Think about the vast amount of frustration caused by people not having enough money, just so the company can shovel even more money onto already overpaid CEOs. (Want to modus again?)
I do think companies should seriously consider the disruption UI changes cause, just like they seriously consider the disruption of price increases, and often it will make sense for the company to put in extra development to spare their users that frustration. For example, for large changes like the ~2011 Gmail redesign you can have a period of offering both UIs with a toggle to switch between them. (And stats on how people use that toggle give you very useful information about how the redesign is working.)
Companies that followed your suggestions would, over the years, look very dated. Their UIs wouldn’t be missing features, exactly, but their features would be clunky, having been patched onto UIs that were designed around an earlier understanding of the problem. As the world changed, and which features were most useful to users changed, the UI would keep emphasizing whatever was originally most important. Users would leave for products offered by new companies that better fit their needs, and the company would especially have a hard time getting new users.
Companies that followed your suggestions would, over the years, look very dated.
“Dated” is not a problem unless you treat UX design like fashion. UIs don’t rust.
their features would be clunky, having been patched onto UIs that were designed around an earlier understanding of the problem
The “earlier understanding” of many problems in UX design was more correct. Knowledge and understanding in the industry has, in many cases, degenerated, not improved.
As the world changed, and which features were most useful to users changed, the UI would keep emphasizing whatever was originally most important. Users would leave for products offered by new companies that better fit their needs, and the company would especially have a hard time getting new users.
Yes, this is certainly the story that designers, engineers, and managers tell themselves. Sometimes it’s even true. Often it’s a lie, to cover the design-as-fashion dynamic.
Changing UIs has costs to users. So does charging for your service. Is charging for your service unethical? Think about the vast amount of frustration caused by people not having enough money, just so the company can shovel even more money onto already overpaid CEOs. (Want to modus again?)
Charging for your service isn’t unethical—though overcharging certainly might be! If companies didn’t charge for their service, they couldn’t provide it (and in cases where this isn’t true, the ethics of charging should certainly be examined). So, yes, once again.
But that’s not the important point. Consider this thought experiment: how much value, translated into money, does the company gain from constant, unnecessary[1] UI changes? Does the company even gain anything from this, or only the designers within it? If the company does gain some value from it, how much of this value is merely from not losing in zero-sum signaling/fashion races with other companies in the industry? And, finally, having arrived at a figure—how does this compare with the aggregate value lost by users?
The entire exercise is vastly negative-sum. It is destructive of value on a massive scale. Nothing even remotely like “charging money for products or services” can compare to it. Every CEO in the world can go and buy themselves five additional yachts, right now, and raise prices accordingly, and if in exchange this nonsense of “UX design as fashion” dies forever, I will consider that to be an astoundingly favorable bargain.
[1] That is, changes not motivated by specific usability flaws, specific feature additions, etc.
“Dated” is a problem for companies because users care about it in selecting products. Compare:
Original GMail: https://upload.wikimedia.org/wikipedia/en/6/67/Gmail_2004.png
Current GMail: https://upload.wikimedia.org/wikipedia/en/1/1b/Gmail_inbox_in_Japanese.png
The first UI isn’t “rusted”, but users looking at it will have a low impression of it and will prefer competing products with newer UIs. I don’t think fashion is the main motivator here, but it is real and you can’t make it go away just by unilaterally stopping playing. (I mean I can but I’m an individual running a personal website, not a company.)
The “earlier understanding” of many problems in UX design was more correct. Knowledge and understanding in the industry has, in many cases, degenerated, not improved.
How so? I can think of cases where earlier UX was a better fit for experienced users and newer UXes are “dumbed down”, is that what you mean?
The entire exercise is vastly negative-sum. It is destructive of value on a massive scale.
Let’s take a case where all the externalities should be internalized: internal tooling at a well-run company. I use many internal UIs in my day-to-day work, and every so often one of them is reworked. There’s not much in the way of fashion here, since it’s internal, but there are still UI changes. A general “let’s redo the UI and stop being stuck in a local maximum” impulse is the main motivation, and I’m generally pretty happy with the results.
I don’t think the public-facing version is that different. If there was massive value destruction then users would move to software that changed UI less.
users looking at it will have a low impression of it
Mistakenly, of course. This is a well-attested problem, and is fundamental to this entire topic of discussion.
I don’t think fashion is the main motivator here
No, the halo effect is the main motivator.
you can’t make it go away just by unilaterally stopping playing
I never said that you could. (Although, in fact, I will now say that you can do so to a much greater extent than people usually assume, though not, of course, completely.)
The “earlier understanding” of many problems in UX design was more correct. Knowledge and understanding in the industry has, in many cases, degenerated, not improved.
How so? I can think of cases where earlier UX was a better fit for experienced users and newer UXes are “dumbed down”, is that what you mean?
In part. A full treatment of this question is beyond the scope of a tangential comment thread, though indeed the question is worthy of a full treatment. I will have to decline to elaborate for now.
If there was massive value destruction then users would move to software that changed UI less.
In practice this is often impossible. For example, how do I move to a browser with which I can effectively browse every website, but whose UI stays static? I can’t (in large part because of anti-competitive behavior and general shadiness on the part of Google, in part because of other trends).
The fact is that such simplistic, spherical-cow models of user behavior and systemic incentives fail to capture a large number and scope of “Molochian” dynamics in the tech industry (and the world at large).
users looking at it will have a low impression of it
Mistakenly, of course. This is a well-attested problem, and is fundamental to this entire topic of discussion.
I’m not sure that this is mistaken: companies that can keep their UI current can probably, in general, make better software. This probably only holds for large companies, though: small companies face more of a choice about what to prioritize, while large companies that look like they’re from 2005 are more likely to be environments that can’t get anything done.
I’m generally pretty retrogrouch, and do often prefer older interfaces (I live on the command line, code in emacs, etc). But I also recognize that different interfaces work well for different people and as more people start using tech I get farther and farther from the norm.
you can’t make it go away just by unilaterally stopping playing
I never said that you could.
That was how I interpreted your suggestion that UX people start to follow a “change UIs only when functionality demands it” rule. Anyone who tried to do the “responsible” thing would lose out to less responsible folks. Even if you got a large group of UX people to refuse work they considered to be changing UIs for fashion, companies are in a much stronger position, since the barrier to entry for UX work is relatively low.
how do I move to a browser with which I can effectively browse every website, but whose UI stays static? I can’t.
The rendering engines of Chrome/Edge/Opera (Blink), Safari (WebKit), and Firefox (Gecko) are all open source, and there are many projects that wrap their own UI around a rendering engine. The amount of work is really not that much, especially on mobile (where iOS requires you to take this approach). If this was something that many people cared about, it would not be hard for open source projects to take it on, or for companies to sell it.
That no one is prioritizing a UI-stable browser really is strong evidence that there’s not much demand.
in large part because of anti-competitive behavior and general shadiness on the part of Google
companies that can keep their UI current can probably, in general, make better software
To the contrary: companies that update their UI to be “current” probably, in general, make worse software (and not only in virtue of the fact that the UI updates often directly make the software worse).
I’m generally pretty retrogrouch, and do often prefer older interfaces (I live on the command line, code in emacs, etc). But I also recognize that different interfaces work well for different people …
Do they? It’s funny; I’ve seen this sort of sentiment quite a few times. It’s always either “well, actually, I like older UIs, but newer UIs work better [in unspecified ways] for some people [but not me]”, or “I prefer newer UIs, because they’re [vague handwaving about ‘modern’, ‘current’, ‘clean’, ‘not outdated’, etc.]”. Much less frequent, somehow—to the point of being almost totally absent from my experience—are sentiments along the lines of “I prefer modern UIs, for the following specific reasons; they are superior to older UIs, which have the following specific flaws (which modern UIs lack)”.
That was how I interpreted your suggestion that UX people start to follow a “change UIs only when functionality demands”. Anyone who tried to do the “responsible” thing would lose out to less responsible folks. Even if you got a large group of UX people to refuse work they considered to be changing UIs for fashion, companies are in a much stronger position since the barrier to entry for UX work is relatively low.
But note that this objection essentially concedes the point: that the pressure toward “modernization” of UX design is a Molochian race to the bottom.
The rendering engines of Chrome/Edge/Opera (Blink), Safari (WebKit), and Firefox (Gecko) are all open source and there are many projects that wrap their own UI around a rendering engine. The amount of work is really not that much, especially on mobile (where iOS requires you to take this approach).
[emphasis mine]
I have a hard time believing that you are serious, here. I find this to be an absurd claim.
in large part because of anti-competitive behavior and general shadiness on the part of Google
Not sure what you’re referring to here?
Once again, it is difficult for me to believe that you actually don’t know what I’m talking about—you would have to have spent the last five years, at the very least, not paying any attention to developments in web technologies. But if that’s so, then perhaps the inferential distance between us is too great.
Much less frequent, somehow—to the point of being almost totally absent from my experience—are sentiments along the lines of “I prefer modern UIs, for the following specific reasons; they are superior to older UIs, which have the following specific flaws (which modern UIs lack)”.
I think maybe what’s going on is that people who are good at talking about what they like generally prefer older approaches? But if you run usability tests, focus groups, A/B tests, etc you see users do better with modern UIs.
But note that this objection essentially concedes the point: that the pressure toward “modernization” of UX design is a Molochian race to the bottom.
I do think there’s a coordination failure here, as there is in any signaling situation. I think it explains less of what’s going on than you do, and I also don’t think getting UX people to agree on a code of ethics that prohibited non-feature-driven UI changes would be useful. (I also can’t tell if that’s a proposal you’re still pushing.)
The amount of work is really not that much
I have a hard time believing that you are serious, here. I find this to be an absurd claim.
To be specific, I’m estimating that the amount of work required to build and maintain a simple and constant UI wrapper around a browser rendering engine is about one full-time experienced engineer for two weeks to build it, and then 10% of their time (usually 0%, but occasionally a lot of work when the underlying implementation changes) going forward. The interface between the engine and the UI is pretty clean. For example, have a look at Apple’s documentation for WebView:
A WebView object is intended to support most features you would expect in a web browser except that it doesn’t implement the specific user interface for those features. You are responsible for implementing the user interface objects such as status bars, toolbars, buttons, and text fields. For example, a WebView object manages a back-forward list by default, and has goBack(_:) and goForward(_:) action methods. It is your responsibility to create the buttons that would send these action messages.
The situation on Android is similar. Hundreds of apps, including many single-developer ones, use WebView to bring a web browser into their app, with the UI fully under their control.
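For concreteness, here is a rough sketch of what that thin UI layer can look like, using Apple’s current WKWebView API (the WebView class quoted above is its deprecated predecessor); the class name and layout details are made up for illustration:

```swift
import UIKit
import WebKit

// Minimal sketch of a "UI-stable" browser shell: WebKit renders the page,
// while every piece of chrome (here, just a back button) is code we own
// and never have to restyle.
final class StableBrowserViewController: UIViewController {
    private let webView = WKWebView()
    private let backButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        backButton.setTitle("Back", for: .normal)
        // Wire our own button to the engine's built-in back-forward list.
        backButton.addTarget(self, action: #selector(goBack), for: .touchUpInside)
        view.addSubview(backButton)
        view.addSubview(webView)
        // (Auto Layout constraints omitted for brevity.)
        webView.load(URLRequest(url: URL(string: "https://example.com")!))
    }

    @objc private func goBack() {
        if webView.canGoBack { webView.goBack() }
    }
}
```

The forward button, address bar, and so on are wired up the same way; the engine exposes the behavior, and the shell decides how (and whether) its presentation ever changes.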
in large part because of anti-competitive behavior and general shadiness on the part of Google
Not sure what you’re referring to here?
Once again, it is difficult for me to believe that you actually don’t know what I’m talking about—you would have to have spent the last five years, at the very least, not paying any attention to developments in web technologies.
I’ve been paying a lot of attention to this, since that’s been the core of what I’ve worked on since 2012: first on mod_pagespeed and now on GPT. When I look back at the last five years of web technology changes the main things I see (not exhaustive, just what I remember) are:
SPDY, QUIC, HTTP/2, HTTP/3, TLS 1.3 (and everything moved to HTTPS post-Snowden)
Most sites can develop only for evergreen browsers (no dealing with IE8 etc)
Service workers, web workers
WebAssembly
Browsers blocking identity in third-party contexts
JavaScript modernization: Promises/async/await etc
I’m still not sure what you’re referring to?
(As before: I work at Google, and am commenting only for myself.)