(Epistemic status: low confidence, and interested in disagreements)
My economic expectations for the next ten years are something like:
Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.
Jobs become scarcer gradually. Humans remain at the helm for a while, but the willingness to replace one's workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it is difficult to falsify this. Regulations take a long time to cut, causing some jobs to persist long after they stop being useful. Humans continue to get very offended if they find out they are talking to an AI in business matters.
Money remains a thing for the next decade and enough people have jobs to avoid a completely alien economy. There is time to slowly transition to UBI and distribution of prosperity, but there is no guarantee this occurs.
Humans continue to get very offended if they find out they are talking to an AI
In my limited experience of phone contact with AIs, this is only true for distinctly subhuman AIs. When that happens, I react emotionally as if I am talking to someone who is being deliberately obtuse, and I become enraged. I'm not entirely clear on why I have this emotional reaction, but it's very strong. Perhaps it is related to the Uncanny Valley effect. On the other hand, I've dealt with phone AIs that (acted like they) understood me, and we concluded a pleasant and businesslike interaction. I may be typical-minding here, but I suspect that most people will only take offense if they run into the first kind of AI.
Perhaps this is related: I felt a visceral uneasiness dealing with chat-mode LLMs, until I tried Claude, which I found agreeable and helpful. Now I have a claude.ai subscription. Once again, I don’t understand the emotional difference.
I’m 62 years old, which may have something to do with it. I can feel myself being less mentally flexible than I was decades ago, and I notice myself slipping into crotchety-old-man mode more often. It’s a problem that requires deliberate effort to overcome.
I think this matches my modal expectations: this is the single most likely outcome, in my mind. I do give a substantial minority probability (say, 20%) to more extreme and/or accelerated cases within a decade; over 2 or 3 decades the odds flip, and the mild scenario itself becomes the minority (say, 20%).
My next-most-likely case is that there is enough middle- and upper-middle-class disruption in employment and human-capital value that human currencies and capital-ownership structures (stocks, and to a lesser extent, titles and court/police-enforced rulings) become confused. Food and necessities become scarce because the human systems of distribution break. Riots and looting destroy civilization, possibly taking AI with it, possibly with the exception of some big data centers whose (human, with AI efficiency) staffers have managed to secure them against the unrest, perhaps in cooperation with military units.
trust in humans over AI persists in many domains for a long time after ASI is achieved.
it may be that we’re just using the term superintelligence to mark different points, but if you mean strong superintelligence, the kind that could—after just being instantiated on earth, with no extra resources or help—find a route to transforming the sun if it wanted to: then i disagree for the reasons/background beliefs here.[1]
a value-aligned superintelligence directly creates utopia. an “intent-aligned” or otherwise non-agentic truthful superintelligence, if that were to happen, is most usefully used to directly tell you how to create a value-aligned agentic superintelligence.