Biological humans and the rising tide of AI
The Hanson-Yudkowsky AI-Foom Debate focused on whether AI progress is winner-take-all. But even if it isn’t, humans might still fare badly.
Suppose Robin is right. Instead of one basement project going foom, AI progresses slowly as many organizations share ideas with each other, leading to peaceful economic growth worldwide—a rising tide of AI. (I’m including uploads in that.)
With time, keeping biological humans alive will become a less and less profitable use of resources compared to other uses. Robin says humans can still thrive by owning a lot of resources, as long as property rights prevent AIs from taking resources by force.
But how long will that last? Recall the displacement of nomadic civilizations by farming ones (which happened by force, not by farmers buying land from nomads), or the enclosures in England (which also happened by force). When the potential gains in efficiency become large enough, property rights get trampled.
Robin argues that such expropriation won't happen, because it would put AIs on a slippery slope toward fighting each other for resources. But the potential gains from AIs expropriating each other are smaller, like one landowner trying to enclose another landowner's estate. And most of those gains can be achieved by AIs sharing improvements with each other, which is impossible with humans. So AIs won't be worried about that slippery slope, and will happily take our resources by force.
Maybe resource-owning humans could upload themselves and live off rent, instead of staying biological? But even uploaded humans might be very inefficient users of resources compared to optimized AIs (e.g. due to simulating too many neurons), so the result is the same.
Instead of hoping that institutions like property rights will protect us, we should assume that everything about the future, including its institutions, will be determined by the values of AIs. To achieve our values, working on AI alignment is necessary whether we face a “basement foom” or a “rising tide” scenario.