I understand the economic arguments for this position, and I agree with them up to a point. Basic “companies have just gotten greedier” arguments certainly don’t work by themselves; they require further questionable assumptions, such as a monopoly, collusion between nominally competing companies, or a consumer base that always accepts whatever price-quality tradeoffs it is offered.
But I don’t have the impression that this essay passes the Intellectual Turing Test of its opponents, nor that it mentions or argues against their strongest arguments. Was it written in a spirit of genuine curiosity about whether the claim is true, or was its bottom line already written in advance?
Anyway, here are some stronger arguments for “consumer products are getting worse” that sound reasonable to me:
Modern review culture (on Amazon, YouTube, etc.) reviews tons of products right after they become available (e.g. because Google and other feeds promote new content over old content). So reviews optimize for criteria that can be checked immediately and neglect those that can’t, like most types of durability (with exceptions: “does this smartphone display get scratched by the keys in my pocket” is easy to show in a five-minute video review). I can’t find it now, but I remember an essay about pocket knives which argued that this review style even leads reviewers to invent new criteria, like a pointless “flick test” (i.e. how satisfying it is to open & close the knife quickly), because that can be demonstrated quickly in a video, whereas the things we might actually care about (does this knife cut well, is it prone to causing injuries, will it rust within a year) can’t be tested quickly and so are pretty much ignored.
Maybe we’ve just gotten worse at doing stuff. Jonathan Blow argues this point extensively for software development and programming (here is a one-hour talk on the topic): e.g. supposedly very low output per programmer at companies like Facebook or Twitter, relative to their exceedingly modest engineering challenges; slow and buggy websites and apps; endlessly increasing levels of abstraction and the growing “technology stack” of modern websites; software getting slower in a way that negates much of the hardware performance gains from Moore’s Law (sometimes summarized as Wirth’s law); etc. It’s not clear whether we’re just making worse software because that costs less and consumers consider it “good enough”, or whether our civilisation has become unable to achieve quality even when it tries to optimize for it. (A toy sketch of the abstraction-overhead mechanism follows below.)
Or maybe we haven’t gotten worse at making products but have been forced to make worse ones, e.g. due to regulation imposing bad design restrictions.
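To make the “growing levels of abstraction” point a bit more concrete, here is a minimal Python sketch. It’s my own illustration, not an example from Blow’s talk, and the `Box` wrapper is a purely hypothetical stand-in: the idea is that each layer of indirection looks individually cheap, but stacked layers cost a constant factor that compounds across a deep technology stack.

```python
# Toy illustration of abstraction overhead (my own example, not from
# Blow's talk): the same sum computed directly vs. routed through a few
# thin, individually-reasonable wrapper layers.
import timeit

def sum_direct(xs):
    """Plain loop: no indirection."""
    total = 0
    for x in xs:
        total += x
    return total

class Box:
    """Layer 1: a generic value wrapper, harmless on its own."""
    def __init__(self, value):
        self.value = value

    def get(self):  # Layer 2: an accessor instead of attribute access
        return self.value

def sum_layered(xs):
    """Same computation, but through the wrapper layers."""
    total = 0
    for x in xs:
        total += int(Box(x).get())  # Layer 3: a "defensive" conversion
    return total

if __name__ == "__main__":
    data = list(range(100_000))
    t_direct = timeit.timeit(lambda: sum_direct(data), number=50)
    t_layered = timeit.timeit(lambda: sum_layered(data), number=50)
    print(f"direct : {t_direct:.3f}s")
    print(f"layered: {t_layered:.3f}s (~{t_layered / t_direct:.1f}x slower)")
```

None of this proves the broader civilizational claim, of course; it only shows the mechanism by which many individually “harmless” layers, each added for locally sensible reasons, can add up to the slow software Blow complains about.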