The innovation tree, overshadowed in the innovation forest
Cross-posted at Practical Ethics.
Many have pronounced the era of innovation dead, peace be to its soul. From Tyler Cowen’s decree that we’ve picked all the low-hanging fruit of innovation, through Robert Gordon’s idea that further innovation-driven growth is threatened by “six headwinds”, to Garry Kasparov’s and Peter Thiel’s theory that risk aversion has stifled innovation, there is no lack of predictions about the end of discovery.
I don’t propose to address the issue with something as practical and useful as actual data. Instead, staying true to my philosophical environment, I propose a thought experiment that hopefully may shed some light. The core idea is that we might be underestimating the impact of innovation because we have so much of it.
Imagine that technological innovation had for some reason stopped around 1945 - with one exception: the CD and CD player/burner. Fast forward a few decades, and visualise society. We can imagine a society completely dominated by the CD. We’d have all the usual uses for the CD—music, songs and similar—of course, but also much more.
Without mass television, the CD and radio would become the premier sources of entertainment for everyone. Newspapers would experiment with bundling CDs with their subscriptions, giving high-quality live sound recordings from major events. The Walkman culture and all the transformations it brought would instead have been based around the portable CD player. Bosses at large companies would probably get into the habit of recording motivational messages and sending them to all employees. Syndicated columnists would record themselves on CDs to distribute to the fast-paced workers who didn’t have time to be at a radio at a certain time, or who were snobbish about high-quality audio. Underground movements would stay in contact with smuggled CDs (as would various youth cultures) and families would stay in touch with mailed CD messages. Various corporations would experiment with ways of sending data via CD, maybe by connecting them up to some sort of typing machine. CD-geek movements would learn to interpret the data just by listening, without needing a translation machine. Coded messages would be sent on CDs, and vast amounts of government secrets and mundane data would be stored on them. Many other ideas would no doubt spring up to use this ubiquitous technology—CD contracts, anyone?
Any social transformations would be enabled by the CD, or certainly blamed on it (“the anti-Brazil war riots of the 1960s were all the fault of permissive parents and those silver disks!”). Jobs would be lost and new ones created, and columnists would enthuse or warn about the technology at length. In 2000, it would be unanimously voted the single most transformative technology of the 20th century. And then when the first proper computers began to be developed in 2001, they would be based initially on CD technology...
Such a transformative impact for what in our world is a rather mundane and obsolete technology! What’s different about our world that relegates the CD to such a low status? Well, we have more innovations. We have colour television, Walkmans, VCRs, computers, iPods, mobile phones and many other technologies that overlap with the CD and can take over many of its roles. As a result we judge the CD to be a middling technology not because it isn’t innovative, but because we have so many other innovations. Thus the more innovations we have, the less transformative each one is.
To really drive this point home, we could go back to our hypothetical CD-only world and remove older innovations like the radio and the telephone. Then the CD would take on more roles and be seen as more innovative and more transformative than before. But again, we haven’t improved the CD in any way to get this change of perspective: we’ve just chopped down the other innovation trees to allow us to see how innovative the CD truly is. And the CD was chosen almost at random to illustrate the point; a similar picture would emerge if any other technology had been chosen as the “only big innovation since 1945”. They are all very innovative, but they get lost in so many other innovations.
Have you actually read The Great Stagnation? It presents a very convincing case. Highly recommended.
Cowen makes it pretty clear that he doesn’t think innovation has stopped completely, but that it is far slower than it was during the great era of growth from 1781 to 1970. He also thinks we may have faster growth in the future.
It seems to me that growth is so much slower because the things we demand at these margins are mostly services (healthcare and education being the fastest growing fields). It’s much easier to improve the efficiency of a factory than a nurse or a consultant or a university professor.
The argument that innovation has slowed is pretty simple. In the developed world, per capita output growth has to be driven by innovation. To some extent people can work harder, but that’s pretty limited. We can see that, by the best measures we have, growth of per capita output has been slowing since the 70s and especially since 2000. That means that innovation is slowing. The counterargument is that the best methods of measuring output aren’t good enough, and that newer forms of innovation are not captured as well as older forms. I guess that’s possible, but it’s not obvious. After all, a lot of tech companies are worth a lot of money.
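To make that reasoning explicit, here is the standard growth-accounting sketch (a textbook decomposition added for illustration, not something the comment above spells out): whatever part of output growth isn’t explained by capital and labour gets attributed to technology, so if per capita output growth slows while inputs don’t, the technology term must be shrinking.

$$\frac{\dot Y}{Y} = \frac{\dot A}{A} + \alpha\,\frac{\dot K}{K} + (1-\alpha)\,\frac{\dot L}{L}$$

Here $Y$ is output, $K$ capital, $L$ labour, $\alpha$ the capital share, and $A$ total factor productivity, the catch-all “innovation” residual.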
Not obvious? How many really amazing things can you get for free these days?
Like actually for free? Not many. Almost everything either requires me to be exposed to ads or to purchase something to be able to consume it. There is a free newspaper where I live, but that has ads. To access the internet I need to buy some sort of device and then pay someone for access to the internet. I can get internet for free in certain businesses or public places (libraries I guess), but in the case of libraries they are supported by tax dollars and the cost is part of GDP. For businesses, providing free wifi is a good way to get people in the door and/or sticking around to consume more overpriced coffee or whatever. So, none of these things are free. Certainly not more free than radio or TV were in the 20th century. The internet isn’t even more free than cable TV since both require ongoing payments in addition to buying the initial device.
That isn’t to say that I don’t feel like I am getting a good deal. I certainly do feel like a lot of things on the internet are good deals. But so what? A lot of things in real life are good deals also. Do internet/tech-related businesses create more consumer surplus per dollar of revenue generated than more traditional businesses? I have no idea, but I’m confident that I can’t figure it out just by thinking about how amazing things are today.
Yes, point made—“free isn’t”. Now, how relevant is it to the point you made above?
You were doing econometrics, and this stuff isn’t absolutely invisible, but it’s much lighter-weight. I can read books and watch movies and send letters and get the news and publish stories for a very modest (zero marginal) monthly fee and by watching some ads which, since these aren’t the boom days of the ’90s, aren’t even worth all that much money behind the scenes.
In the old days, much less of that would have happened. It’s great value—but because it’s so cheap, it’s discounted.
Basically: does the CPI (or whatever inflation measure you’re using) incorporate the (marginal) price of renting a video? … on YouTube? Does it include the price of getting a book? … from an author who put it on the net?
If people did all the things we do for each other now, but without any additional money flying around—if they began putting up beautiful artwork everywhere that made everyone’s life an utter joy… the economic indicators wouldn’t notice.
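Roughly (a standard textbook framing, not a claim from this thread): GDP-style indicators add up priced transactions, while consumer surplus, the value above what you actually pay, never enters the sum. So a good whose price drops to zero contributes nothing to the indicator no matter how much value it delivers.

$$GDP \approx \sum_i p_i q_i, \qquad CS_i = \int_0^{q_i}\!\bigl(p_i^{demand}(q) - p_i\bigr)\,dq$$

As $p_i \to 0$, that good’s term in the first sum vanishes even though its surplus integral can stay large.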
See also this exercise from an economics textbook.
If we’re going to treat being exposed to ads as a sufficient condition for “not free,” then I’d think that any experience which presents even trivial inconveniences would have to be classified as not free, regardless of whether it costs any money. Seeing ads doesn’t actually induce an obligation to buy anything. Personally, I have never bought anything that I have received online ads for (I know that people tend to be very bad at remembering how ads influence their buying behavior, but ads tend to invoke in me a very strong contrarian response which actively prevents me from buying the products.)
Also, some things have cost in the sense that they’re supported by tax dollars or other communal funds, but these things are generally paid for whether you use them or not, so you incur no additional expense by using them.
Doesn’t this line of argument pretty much concede that few new inventions are innovative? Granted, it says each new invention would be a mindblowingly massive innovation if there were no other inventions, but in practice the marginal innovativeness of new inventions is small, just as (I’m assuming — I haven’t read Cowen’s book) the Great Stagnation thesis dictates.
This sentence seems to be missing either ‘over-’, ‘under-‘, or ‘mis-’ in front of ‘estimating’.
Corrected, thanks!
Well, simply no. Almost all of the proposed uses for CDs are rather impractical, or would be popular only among some geeks. Without computers to use the recorded data, the CD is just a better Walkman.
Don’t underrate walkmans. They were behind the Iranian revolution. Tape recordings allowed a cleric to spread his message to a much larger audience.
I was going to say… aren’t the USSR and Iran two of the paradigmatic cases for new media technologies helping enable revolutions?
You seem very certain of this. Could you develop your argument? Many technologies have ended up being used in ways that were not predicted or expected.
Now we know how tape recorders were actually used. The proposed things were technically possible with cassettes but were not that popular at all. There is no reason to expect that CD recorders would have been used in a very different way.
I don’t think media played any significant role in the USSR case.
I think Cowen et al. are completely wrong. Innovation is not slowing down but rather speeding up. In particular, IT innovation is speeding up. There is an immense number of problems in society that are, at root, information problems. Car crashes are due to people not being able to react properly to information, “best-before” dates are due to a lack of information about whether a certain product is still fresh, etc., etc. Information technology is about to transform medicine, education, you name it.
Another subject that has progressed enormously in recent years is psychology. I think we’ve only seen the start of applications of this knowledge. If we start overcoming biases, exposing the egotism that destroys communal enterprises (brilliantly analyzed in this post http://lesswrong.com/lw/le/lost_purposes/ ), solving motivational issues on a large scale, etc., the effect on growth and human flourishing in general should be great.
There might be even more transformational innovations around the corner which we haven’t thought of. Hence I think the pessimism is mostly down to a lack of imagination. Another reason to believe in more innovation is that the last few decades have seen an unprecedented expansion of capitalism (globally) and science: the traditional engines of growth. It would be surprising if this didn’t pay off.
The problem is, it was “about to transform” for the last thirty years, more or less.
… and it has transformed education.
I find it a bit hard to imagine how people learned things before Wikipedia and Stack Exchange and Google.
No, it did not. Go visit a high school or a college.
Learning became much easier. Education didn’t change much. Yet.
Even when teachers heavily resist innovation, Wikipedia still gets used by the students.
And yet they did. As such, these things are less transformative than you think.
The leading edge of medicine is already massively better than the leading edge 30 years ago.
All that needs to improve there is better distribution. That’s a problem to be solved, yes.
Given that overweight and diabetes are bigger problems than they were 30 years ago, how do you know that medicine is much better?
Lifespan probably lags behind as a metric.
Developing new drugs gets more and more expensive. http://pipeline.corante.com/archives/2012/03/08/erooms_law.php
If you look at the exponentially rising price of developing new drugs, I think that’s what you’d expect to see when innovation is slowing down in medicine.
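For reference, the linked post is about Eroom’s law (“Moore” backwards): the number of new drugs approved per billion dollars of inflation-adjusted R&D spending has fallen by roughly half every nine years since 1950. As a back-of-the-envelope sketch of that trend (the nine-year halving rate is the commonly quoted figure, not a number from this thread):

$$\text{approvals per \$1bn R\&D}(t) \approx N_{1950} \cdot 2^{-(t-1950)/9}$$

which is the sort of curve you’d expect if innovation in drug development were getting steadily harder.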
Some parts are, some parts are not. Anything major happen in, say, autoimmune or degenerative diseases? Can medicine cure diabetes?
The proof of the pudding is in the eating—are the mortality numbers (or, better, life expectancy) “massively better” today than 30 years ago?
There has been major progress with at least one autoimmune disorder over the last 30 years. Even if there has not been an outright cure, the life expectancy of someone with HIV/AIDS, who can get treatment, is far greater now.
edit: clarifying, in case the downvoter didn’t understand the combination I responded to.
It seems that we gained roughly 6 years of life expectancy from 1950 to 1980 and we got the same six from 1980 to 2010. I used UK numbers.
No, it is not, and it never will be unless a cure for aging is found. Most people die from old age now; even, say, a cure for all forms of cancer wouldn’t change these statistics much.
What does that mean?
I already raised the problem of distribution, so we would need to find statistics on the difference between differing levels of care. I’m not sure how to dig those numbers up, even substituting wealth as a proxy for quality of care. Especially with various confounders taken out like smoking rates, which themselves vary over time.
I do note that there are a lot more very very old people than there used to be.
Also, there is a large lag. For instance, if you got Polio as a young child 70 years ago, you’re still living with that damage. A simple cure for HIV would barely have shown up in US statistics if we’d developed it as recently as 2000.
And then of course we get down to the other side of the claim you were making: that it’s been ‘about to massively improve for the past 30 years’. What massive improvements were we expecting ‘any moment now’, 30 years ago?
Increased understanding and education is probably another factor.
A man who has sex with men, for instance, could use these principles to inform the decision to partake in sex in one particular way, but not another.
Whereas for anal, that’s the highest risk.
“Massively better” implies obviousness and ability to see plainly without reliance on statistics that need to be dug up.
Links?
The life expectancy for a 70-year-old female in 1984 was 15.09 years; the same figure in 2014 is 15.6 years (source). I don’t see any massive improvement.
As to the promises of informatics for medicine, the main promise was personalized medicine—treatment based on your individual genetics and biochemistry, with bespoke drugs made just for you. There’s certainly progress (e.g. gene sequencing became really cheap) but not much has made its way into mainstream medicine yet.
You specifically asked for statistics! Then you blame me for looking for statistics? Or are you saying if my statistics need to be dug up (aren’t the simplest-to-get statistics) then that makes my claim wrong? No, it just makes my claim inconveniently-shaped for statistics, which is obvious by looking at it.
Then you use general-population statistics when I specifically said it was leading-edge stuff with minimal penetration. And of course much of that change is going to be swamped by population-health changes like smoking habits (better) and weight/sedentarism (worse).
Old folks? Numbers of very old people. Also, Centenarians is pretty suggestive.
You say ‘Personalized Medicine’ is a 30 year broken promise? It was first mentioned back in 1990 and then again in 2000. Link. This does not qualify as progress we’d been promised for 30 years, let alone a promise that was not delivered. Especially since, for the top-end cohort I was talking about, it HAS begun to be delivered. No one at all could get their genome sequenced 30 years ago, and if they had, no one would have had more than the slightest clue what to do with it. Now you can, and we’ve already found a bunch of useful things out. More are coming all the time. It’s too new to show up in statistics even for those who got it, since it’s mainly a way of looking ahead.
Endoscopic surgery was in its infancy in 1984, and has greatly improved in the meantime, meaning less trauma, shorter hospital stays, and improved outcomes. That’s even widespread now. Even outside of full-on endoscopic surgery, many conventional surgeries use much shorter incisions (I have a 4 cm appendectomy scar).
Eye surgery went from only one procedure that sometimes fixed one problem, to common and able to reliably fix several problems. It’s not necessarily going to extend one’s life much directly, but that is a lot of QALYs.
Back in 1984, HIV was a death sentence. Sure, some of the improvement is from its evolving to be less aggressive, but we’ve worked out responses, when before there was effectively nothing.
Cancer treatments have far fewer side effects now, and radiotherapy can reach deeper and so target more cancers. More QALYs again, though not so much an improvement in survival. Oh, and we have a vaccine for HPV which looks like it works well.
All sorts of tricks with bone marrow and blood stem cells came up only in the ’90s or later.
Edited to add: recent advances in prosthetics.
No, the companies who put those dates on products have an interest in your throwing their products away to buy new ones. In the US they put those dates on products without being forced to do so by the government.
This can backfire if you overdo it and you’re not a monopolist: if I’m shopping for groceries and your product says “best before 10 September” whereas your competitor’s product says “best before 12 September”, all other things being equal I’ll buy the latter.