I agree that in a fast takeoff scenario there’s little reason for an AI system to operate within existing societal structures, as it can outgrow them more quickly than society can adapt. I’m personally fairly skeptical of fast takeoff (< 6 months, say) but quite worried that society may be slow enough to adapt that even years of gradual progress, with a clear sign that transformative AI is on the horizon, may be insufficient.
In terms of humans “owning” the economy but still having trouble getting what they want, it’s not obvious this is a worse outcome than the society we have today. Indeed, this feels like a pretty natural progression of human society. Humans already interact with (and not infrequently get tricked or exploited by) entities smarter than them, such as large corporations or nation states. Yet even though I sometimes find I’ve bought a dud on the basis of canny marketing, overall I’m much better off living in a modern capitalist economy than in the stone age, where humans were more directly in control.
However, it does seem like there’s a lot of value lost in the scenario where humans become increasingly disempowered, even if their lives are still better than in 2022. From a total utilitarian perspective, “slightly better than 2022” and “all humans dead” are rounding errors relative to “possible future human flourishing”. But things look quite different under other ethical views, so I’m reluctant to conflate these outcomes.
I think such a natural progression could also lead to something similar to extinction (in addition to permanently curtailing humanity’s potential). E.g., maybe we are currently in a regime where optimizing proxies harder still leads to improvements on the true objective, but this could change once we optimize those proxies even more. The natural progression could follow an inverted U-shape.
E.g., take the marketing example. Maybe we will get superhuman persuasion AIs, but also AIs that protect us from persuasive ads and AIs that can provide honest reviews. It seems unclear whether these things would tend to balance out, or whether e.g. everyone will inevitably be exposed to some persuasion that causes irreparable damage. Of course, things could also work out better than expected, if our ability to keep AIs in check scales better than dangerous capabilities.