“There’s a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying.”
Tyler Cowen is again relevant here with his http://www.amazon.com/The-Great-Stagnation-Low-Hanging-ebook/dp/B004H0M8QS , though I think he considers it less cultural than Thiel does.
“We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply.”
As someone working in special-purpose software rather than general-purpose AI, I think you drastically overestimate the difficulty of outcompeting humans in significant portions of low-wage jobs.
“The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and...just moves atoms around into whatever molecular structures or large-scale structures it wants....The human species would end up disassembled for spare atoms”
I also think you overestimate the ease of fooming. Computers are already helping us design themselves (see http://www.qwantz.com/index.php?comic=2406), and even a 300 IQ AI will be starting from the human knowledge base and competing with microbes for chemical energy at the nano scale and humans for energy at the macro scale. I think that a 300-IQ AI dropped on earth today would take five years to dominate scientific output.
300 IQ is 10 standard deviations above the mean. So picture a trillion planets, each with a trillion humans on them; take the smartest person out of all of them, transport him to our reality, and make it very easy for him to quickly clone himself. Do you really think it would take this guy five full years to dominate scientific output?
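Taking the 10-SD figure at face value (with the conventional IQ standard deviation of 15, IQ 300 would actually sit more than 13 standard deviations out, and be far rarer still), the population arithmetic here checks out under a normal model; a quick sketch:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

population = 1e12 * 1e12        # a trillion planets times a trillion humans each
rarity = normal_tail(10.0)      # chance of a given person being 10 SD above the mean
expected = population * rarity  # how many such people that population should contain

print(rarity)    # on the order of 1e-23
print(expected)  # a handful -- so the smartest of 10^24 people is indeed around 10 SD out
```

So "the single smartest person out of a trillion trillion" is a fair gloss of 10 SD, even if it is nearly impossible to intuit what that person would be like.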
There is almost no way this hypothetical provokes accurate intuitions about an IQ of 300. It’s hard to ask someone to picture something they are literally incapable of picturing, and I suspect people hearing this will just default to “someone a little smarter than the smartest person I know of”.
I know I’m doing that and I can’t stop doing it. “A trillion planets each with a trillion humans on them” is something important, but I can’t visualize it at all.
I’m picturing someone with the optimization power of the entire human civilization, which seems a little more tractable.
It’s also based on nothing whatsoever, but it’s at least in the right direction? I hope.
Plenty of low-wage jobs have been automated away by machines over the last four centuries. You don’t end up permanently, irrevocably unemployed until all the work you can do has been automated away.
The big thing here is that the massive productivity increases of the first and second industrial revolutions were matched by a correspondingly dramatic increase in consumption, which kept employment steady. Even if productivity increases by a factor of 20, it doesn’t cause unemployment if consumption also increases by a factor of 20, which is basically what happened over the course of the 19th and 20th centuries.
I’m not sure that’s going to happen again, though. It might, of course, but if productivity increases and (for whatever reason) consumption doesn’t continue to increase by the same factor, it would tend to cause unemployment.
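The arithmetic above reduces to a simple ratio: employment tracks demand divided by productivity. A stylized sketch (the numbers are invented for illustration, not historical data):

```python
# Stylized identity: workers needed = output demanded / output per worker.
def workers_needed(demand, output_per_worker):
    return demand / output_per_worker

before = workers_needed(100.0, 1.0)      # baseline: 100 workers
matched = workers_needed(2000.0, 20.0)   # productivity x20, consumption x20
stagnant = workers_needed(100.0, 20.0)   # productivity x20, consumption flat

print(before, matched)  # employment unchanged when demand keeps pace
print(stagnant)         # flat demand leaves 95% of the workers unneeded
```

The whole disagreement is over which of the last two lines the future looks like.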
This would ordinarily be diagnosed as an aggregate demand deficit and solved with additional money—it falls under the category of things that NGDP level targeting ought to solve unless there is something not further specified going on.
A genuine (possibly very stupid) question, since I have practically no knowledge about macroeconomics: when I think of my own preferences, I feel like I pretty much already have all the things that can be bought with money and that I might want. Yes, I would like to have somewhat more money, but mostly so that I could increase my savings to give me more of a safety cushion in case I ever need it, and of course to donate to altruistic aims. If I did start earning a lot more, there are very few things that I imagine would change WRT my own quality of life: maybe I’d eat out a little more, and possibly visit friends in distant countries more often, but for the most part I just don’t have any desires that I’m currently unable to fulfill because of not having the money for it.
Now my question is, if it happened that I was actually the typical case, who basically had no unfilled preferences of the kind that could be filled with extra money—and I freely admit that I might be very atypical in this respect, but supposing that I wasn’t… then how would extra money solve the resulting lack of growth in demand, if everyone was basically already content with what they had?
If everyone already has everything they want, your economy is solved.
Ah, right. The first response that came to mind was “well, I might already have everything that I want, but what about those poor or unemployed folks we’re worried about”—but of course, if there are such people with unsatisfied desires, then obviously that means that there’s still an unmet demand that the increased production can help meet, and the extra money is so that the poor people can actually buy the fruits of that additional production? Thanks, that makes sense.
Don’t forget about status goods. It’s pretty much hardwired into humans to be competitive and one of the ways to compete is by having a bigger/shinier/better thing. Note: it’s comparative, not absolute, so you can (and do) get into status fights which have no natural stopping point. Your desire is to be bigger than the other guy, and he has the same desire, so you both just escalate.
For a real-life example look at superyachts owned by billionaires :-)
Do there exist any studies on how much money people actually spend on pure status pursuits? People keep mentioning that phenomenon, so I must assume that it exists, but I practically never seem to run into it in my own life, so I’m curious to what extent that’s just me living in a bubble or not recognizing such purchases. (People putting money into stuff like collectible games does come close, but even that feels more like spending money on a hobby rather than on pure status, especially given that I used to enjoy collecting some CCGs even when I didn’t know of anyone else who played them.)
I don’t know, but I also think that such studies would have major problems with data. In a way, whether you’re buying utility or status is about intent. Let’s say I like fishing and want a new boat. I can buy a 12′ boat or an 18′ boat. The 18′ one is more powerful and convenient, but also more expensive. It’s also bigger than my neighbor’s 16′ boat. I pick the 18′ boat. How do you determine what role my desire to trump my neighbor played in my decision to get the bigger one?
In practical terms, status competitions seem to take off when people have nothing useful to do with their money (i.e., once they personally pass into a post-scarcity era). Or, of course, if they really want status.
Look at what Russian oligarchs are buying. Look at what the Chinese are building (see e.g. http://edition.cnn.com/2013/07/24/world/asia/china-government-building-ban/?hpt=ias_c2). Do you think Dubai built its tallest building just because they wanted so much office space on so little land?
Well, I can tell you that as a general rule, if you give more money to the rich, they do not spend much more than they would anyway. There has been economics research on this: basically, if you’re trying to stimulate the economy during a recession, the government can spend more money, or it can give tax breaks to the poor, the middle class, or the rich. Of all of those options, giving tax breaks to the rich has the smallest stimulus impact, because if someone is already rich, increasing their income doesn’t affect their spending very much. It has some impact, but it’s small.
May I have a few links? I’d like to examine the research on this in more detail.
Sure. If you want to look up economic research about this, the main thing to look for is what they call the MPC, the “marginal propensity to consume”: if you add one more dollar to someone’s income, how much will their consumption increase? It’s generally somewhere between 1 and 0, 1 being “you give someone another dollar in income and they spend all of it” and 0 being “they spend none of it”. Generally speaking, MPC tends to decline the more income someone has.
Here is one study, done in Italy in 2012, on the subject:
http://www.stanford.edu/~pista/MPC.pdf
“We find that households with low cash-on-hand exhibit a much higher MPC than affluent households, which is in agreement with models with precautionary savings where income risk plays an important role. The results have important implications for the evaluation of fiscal policy, and for predicting household responses to tax reforms and redistributive policies. In particular, we find that a debt-financed increase in transfers of 1 percent of national disposable income targeted to the bottom decile of the cash-on-hand distribution would increase aggregate consumption by 0.82 percent. Furthermore, we find that redistributing 1 percent of national disposable income from the top to the bottom decile of the income distribution would boost aggregate consumption by 0.1 percent.”
It’s worth mentioning that while the “marginal propensity to consume declines with income” idea was assumed by Keynes and is part of Keynesian economics, others have contested it. There is a lot of debate, for example, on the difference between “windfall income” and “permanent income” and how each affects MPC. But in general, if you look in most economics textbooks, the model you usually see is a sloping curve where MPC drops as income goes up; it never goes quite to zero, since there is usually some increase in consumption as income increases, but it falls quite close to zero as income rises. The consumption function usually looks something like this:
http://en.wikipedia.org/wiki/File:MPC.png
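As a toy illustration of that shape (the functional form and constants here are invented for illustration, not fitted to any data), here is a consumption function whose MPC falls toward zero as income grows:

```python
def consumption(income, autonomous=5_000.0, c=150.0):
    """Toy consumption function: spending grows with income, but ever more slowly."""
    return autonomous + c * income ** 0.5

def mpc(income, delta=1.0):
    """Numerical marginal propensity to consume: extra spending per extra dollar."""
    return consumption(income + delta) - consumption(income)

for y in (20_000, 80_000, 1_000_000):
    print(f"income {y:>9,}: MPC ~ {mpc(y):.3f}")
```

Running this shows the MPC shrinking steadily as income rises, which is the qualitative claim at issue: each marginal dollar given to a high earner adds less consumption than the same dollar given to a low earner.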
Perhaps. In practice, though, there does seem to come a point where adding more money doesn’t significantly increase demand; if you already have a million dollars a year in income and that increases by a factor of 20, you probably aren’t going to use 20 times as many consumer goods or services.
Maybe advances in technology are going to create enough new types of goods and services that increasing demand can keep up with increases in production, but I’m not sure that’s guaranteed to happen. If it doesn’t, it seems like there is some upper limit where enough stuff is produced for everyone without needing more than a fraction of the population to produce it.
And in a world where all work CAN be automated, human service can still exist side by side with it. A robot might be able to cut my hair, but I’d pay a premium to have a person do it because I enjoy the experience (I sometimes pay for barber shaves before job interviews rather than do it myself). Similarly, I’d probably pay a premium for an actual bartender over a Barmonkey-type robot in many settings. I pay a premium over Amazon at the nearby bookstore because I enjoy the old medieval-history PhD who runs the shop and his conversations, recommendations, etc. I can imagine a world full of robots where face-to-face service becomes the luxury item.
That is very plausibly a world in which unemployment is massively higher than today, if sentiment is the only remaining reason to employ humans at anything; and a world in which a few capital-holders are the only ones who can afford to employ all these premium human hairdressers etcetera. If this is how things end up, then I would call my thesis falsified, and admit that the view I criticized was correct.
If this happens, then some of the robots will start to look and behave exactly like humans. Robot prostitutes would look like human supermodels. This’ll cause more unemployment.
Don’t underestimate humans’ desire for authenticity. As an example, note that even nowadays, some people pay extra for handcrafted knickknacks and suchlike. You can say it’s a silly desire, but it’s what they want. If you said to them, “Hey, want to buy this factory-made knickknack? It looks just like a handmade one,” they would, for the most part, just turn you down. For better or worse, the desire for authenticity seems to be a deep part of humanity’s utility function.
Or look at the well known thought experiment of the transporter device. You step in, it scans your body, disintegrates your body, sends the message of what your body was like to the destination transporter, which then reconstructs you, exactly like you were before. Most humans express serious misgivings about going through one of those. They feel it wouldn’t be “the real them” anymore. Is that silly? Yes. But it reflects our human desire for “authenticity”.
Or until the supply of displaced low-skill workers depresses the remaining low-skill wage beneath the minimum wage or beneath the cost of outsourcing. I think that we are eliminating a larger proportion of low-skill jobs per year than we ever have before, but I agree that the retraining and regulation issues you pointed out are significant.
Well, there’s an obvious solution for that.
Yes, inflation.
I don’t think he can hear you across the inferential chasm.
Could you point me in the direction of a bridge?
I would estimate even longer: a lot of science’s rate-limiting steps involve simple routine work that is going to be hard to speed up. Think about the extreme cutting edge: how much could an IQ-300 AI speed up the process of physically building something like the LHC?
Could you give three examples? (I’m not trying to be a wise-ass, I actually thought about it and couldn’t find any solid ones.)
Have you spent much time working in labs? It’s been my experience that most of the work is data collection, where the process you are collecting data on is the limiting factor. Honestly, I can’t think of any lab I’ve been a part of where data collection was not the rate-limiting step.
Here are the first examples that popped into my head:
Consider Lenski’s work on E. coli. It took from 1988 to 2010 to get to 50,000 generations (and the experiment is still going). The experimental design and data analysis phases here are minimal in length compared to the time it takes E. coli to grow and breed.
It took 3 years to go from the first potential top quark events on record (1992) to actual discovery (1995). This time was just waiting for enough events to build up; that’s 3 years of simply letting the machine run. (I’m ignoring the 20 years between prediction and first events, because maybe a superintelligence could have somehow narrowed down the mass range to explore, and I’m also ignoring the time required to actually build an accelerator.)
Depending on what you are looking for, timescales in NMR collection are weeks to months. If your signal is small, you might need dozens of these runs.
Also, anyone who has ever worked with a low-temperature system can tell you that keeping the damn thing working is a huge time sink. So you could add ‘necessary machine maintenance’ to these sorts of tasks. It’s not obvious to me that leak-checking your cryogenic setup to troubleshoot it can be sped up much by higher IQ.
No, I did not, and it shows :-)
Thank you for the examples; I see your point. I can imagine plausible-sounding ways 300-IQ AIs would accelerate some of these, but since I don’t really have direct experience, that might not mean much.
That said, I notice that bluej’s post mentioned the AI dominating scientific output, not necessarily increasing its rate by much. Of course, a single AI instance would not dominate science (as evidenced by the fact that the few ~200 IQ humans that existed didn’t claim a big share of it), but an AI architecture that can be easily replicated might. After all, at least as far as IQ is concerned, anyone who hires an IQ 140–160 scientist now would just use an IQ 300 AI instead.
Of course, science is not just IQ, and even if IBM’s Watson had IQ 300 right now, I doubt enough instances of it would be built in five years to replace all scientists, simply due to hardware costs (not to mention licensing and patent wars). But then again, I don’t have a very good feel for the relative cost of humans and hardware for things the size of Google, so I don’t have very high confidence either way. But certainly 20 to 30 years would change the landscape hugely.
Yeah, exactly. Especially if you take Cowen’s view that science requires increasing marginal effort.